Environmental Indicators

Better Coordination Is Needed to Develop Environmental Indicator Sets That Inform Decisions

GAO ID: GAO-05-52
November 17, 2004



GAO-05-52, Environmental Indicators: Better Coordination Is Needed to Develop Environmental Indicator Sets That Inform Decisions

This is the accessible text file for GAO report number GAO-05-52, entitled 'Environmental Indicators: Better Coordination Is Needed to Develop Environmental Indicator Sets That Inform Decisions', which was released on November 17, 2004.

This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version.

We welcome your feedback. Please e-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Report to Congressional Requesters:

November 2004:

ENVIRONMENTAL INDICATORS: Better Coordination Is Needed to Develop Environmental Indicator Sets That Inform Decisions:

[Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-52]

GAO Highlights:

Highlights of GAO-05-52, a report to congressional requesters:

Why GAO Did This Study:

Environmental indicator sets assemble quantitative measures of conditions and trends (known as indicators) to assess the state of the environment and natural resources and to gauge progress toward specific goals. Such sets are now being developed to bridge the gap between needed and available information and to prioritize further data collection. The widespread development and use of environmental indicator sets has led federal and nonfederal entities to consider the benefits such sets provide when measuring performance and improving oversight of environmental programs. In this context, GAO was asked to identify (1) the purposes for which federal and nonfederal organizations are developing and using environmental indicator sets, and how they are being used; and (2) the major challenges facing the development and use of environmental indicator sets.

What GAO Found:

GAO identified the purposes for developing environmental indicator sets and major challenges facing their development and use to inform decisions by interviewing key experts, surveying developers and users, and studying eight major indicator sets. GAO found that federal and nonfederal organizations develop environmental indicator sets for several purposes, including assessing conditions and trends, communicating complex issues, and supporting performance management activities. Some environmental indicator sets are limited to use within specific political jurisdictional boundaries, while others are confined to specific natural areas, such as watersheds, lake basins, or ecosystems. Similarly, some sets address specific resources, such as water quality or land use, while others focus on quality of life issues or sustainable development. The indicator sets GAO reviewed are primarily used to assist in strategic planning efforts, communicate complex environmental issues, and track progress toward environmental goals.

Environmental indicator set developers, both federal and nonfederal, commonly face several major challenges. Such challenges include ensuring that a sound, balanced process is used to develop indicators, which can require a resource-intensive effort to address the needs of potential users. Similarly, obtaining sufficient data on environmental conditions and trends and their causes is particularly problematic. Another key challenge in developing useful environmental indicator sets involves coordinating and integrating the various related federal and other indicator sets in order to advance knowledge about the environment. In this regard, the efforts of the Council on Environmental Quality's (CEQ) Interagency Working Group on Indicator Coordination are promising, but they lack the long-term, stable institutional arrangements needed to ensure continued guidance and coordination of federal activity in this area.

Moreover, indicator sets designed to link management activities, environmental and natural resource conditions and trends, and human and ecological health have difficulty because many such relationships are not well understood. To that end, the Environmental Protection Agency's (EPA) continuing work to develop indicators to assist the agency's efforts to manage for results highlights this challenge. While EPA has made progress, its efforts to better understand such relationships over many years have been hampered not only by technical difficulties in establishing linkages between program activities and changes in the environment, but also by changes in leadership within the agency and the absence of a systematic approach, including clear expectations, milestones, and designated resources. Such institutional arrangements would enable the agency's senior management, Congress, and other stakeholders to monitor and assist EPA's efforts toward a complete and periodically updated Report on the Environment.

What GAO Recommends:

GAO recommends that the Chair of CEQ develop institutional arrangements needed to ensure a concerted, systematic, and stable approach to the development, coordination, and integration of environmental indicator sets. Moreover, GAO recommends that the EPA Administrator establish clear lines of responsibility and accountability and identify specific requirements for developing and using indicators. CEQ and EPA generally agreed with GAO's recommendations.

www.gao.gov/cgi-bin/getrpt?GAO-05-52

To view the full product, including the scope and methodology, click on the link above. For more information, contact John B. Stephenson at (202) 512-6225 or stephensonj@gao.gov.
[End of section]

Contents:

Letter:

Results in Brief:
Background:
Environmental Indicator Sets Are Developed for a Variety of Purposes, and Users Generally Report Positive Impacts:
Major Challenges Facing the Development and Use of Environmental Indicator Sets:
Conclusions:
Recommendations for Executive Action:
Agency Comments and Our Evaluation:

Appendixes:

Appendix I: Objectives, Scope, and Methodology:
Compendium of Environmental Indicator Sets:
Survey of Practitioners:
Case Study:
Meeting of Experts Convened by the National Academy of Sciences:
Appendix II: Key Environmental Indicator Initiatives Identified by Experts:
Appendix III: Environmental Indicator Set Case Study Profiles:
The Heinz Center's State of the Nation's Ecosystems:
EPA's National Coastal Assessment:
Chesapeake Bay Program:
Great Lakes State of the Lakes Ecosystem Conference:
Minnesota Department of Natural Resources' Strategic Conservation Agenda:
Environmental Protection Indicators for California:
Quality of Life Indicator Set, Jacksonville, Florida:
Environmental Indicators Project, West Oakland, California:
Appendix IV: Selected Activities Identifying Need for More Comprehensive Environmental Information:
Appendix V: Environmental Reporting by Private and Public Organizations:
Appendix VI: Accounting for the Environment:
Appendix VII: The Uncertain Cost of Environmental Information:
Appendix VIII: Selected Options:
Appendix IX: Comments from the Council on Environmental Quality:
Appendix X: Comments from the Department of the Interior:
Appendix XI: GAO Contact and Staff Acknowledgments:
GAO Contact:
Staff Acknowledgments:

Bibliography:

Related GAO Products:

Tables:

Table 1: Selected Major Environmental Research and Monitoring Networks and Programs[A]:
Table 2: Ten Challenges Most Frequently Cited as Major or Moderate by Survey Respondents:
Table 3: Ten Criteria Used to Select Indicators Most Frequently Cited by Survey Respondents:
Table 4: Sufficiency of Current Environmental Data to Support Three Major National Indicator Sets:
Table 5: Summary of Survey Participants:
Table 6: Environmental Indicator Sets Selected for Case Study Review:
Table 7: Major Pieces of Legislation to Address Federal Environmental Data and Indicator Issues, 1970-2004:
Table 8: Selected Congressional Hearings Addressing Federal Environmental Data and Indicator Management Issues, 1970-2004:
Table 9: Selected Academic Reports Addressing Federal Environmental Data and Indicator Management Issues:
Table 10: Direct Funding for Major Environment, Energy, and Natural Resources Statistical Programs:

Figures:

Figure 1: Nitrate Load Carried by Major Rivers:
Figure 2: Historical Wildfires in California, 1950 to 1997:
Figure 3: Ten Purposes for the Development of Environmental Indicator Sets Most Frequently Cited by Survey Respondents:
Figure 4: Selected Activities Identifying Need for More Comprehensive Environmental Information:
Figure 5: The State of the Nation's Ecosystems Report:
Figure 6: Draft National Coastal Condition Report II:
Figure 7: The State of the Chesapeake Bay Report:
Figure 8: State of the Great Lakes Report:
Figure 9: The Strategic Conservation Agenda Report:
Figure 10: Environmental Protection Indicators for California Report:
Figure 11: Jacksonville's 2003 Quality of Life Progress Report:
Figure 12: West Oakland's Neighborhood Knowledge for Change Report:

Abbreviations:

BEA: Bureau of Economic Analysis:
Cal/EPA: California Environmental Protection Agency:
CEQ: Council on Environmental Quality:
CSERA: Canadian System of Environmental and Resource Accounts:
EII: Environmental Indicators Initiative:
EIP: West Oakland Environmental Indicators Project:
EMAP: Environmental Monitoring and Assessment Program:
EPA: Environmental Protection Agency:
EPIC: Environmental Protection Indicators for California:
GEOSS: Global Earth Observation System of Systems:
GFT 250: Global Fortune Top 250 international companies:
GPRA: Government Performance and Results Act:
IEESA: Integrated Economic and Environmental Satellite Accounts:
JCCI: Jacksonville Community Council Inc.:
NAMEA: National Accounting Matrix including Environmental Accounts:
NAS: National Academy of Sciences:
NCA: National Coastal Assessment:
NEPA: National Environmental Policy Act:
NOAA: National Oceanic and Atmospheric Administration:
NSTC: National Science and Technology Council:
OMB: Office of Management and Budget:
ORD: EPA's Office of Research and Development:
OSTP: Office of Science and Technology Policy:
PART: Performance Assessment Rating Tool:
PSR: pressure-state-response model:
SCA: Strategic Conservation Agenda:
SOLEC: Great Lakes State of the Lakes Ecosystem Conference:
TRI: Toxic Release Inventory:
USDA: U.S. Department of Agriculture:
USGS: U.S. Geological Survey:

Letter:

November 17, 2004:

The Honorable Sherwood L. Boehlert:
Chairman, Committee on Science:
House of Representatives:

The Honorable Vernon J. Ehlers:
Chairman:
The Honorable Mark Udall:
Ranking Minority Member:
Subcommittee on Environment, Technology and Standards:
Committee on Science:
House of Representatives:

Comprehensive and reliable information on the nation's environment and natural resources is a cornerstone of effective environmental management and an integral part of a national strategy to anticipate and address problems. Governments, businesses, and citizens depend on relevant, accurate, and timely federal data and statistics to make informed decisions about a range of environmental issues--including evaluating the performance of environmental programs, aligning the efficiency of markets with environmental protection, assessing the state of the environment and natural resources, and identifying emerging issues and options for action. Although data and statistics are rarely the sole factors that determine how society should address any given issue, reliable scientific information is essential to support the assessment of various alternatives and inform policy decisions.
Federal environmental monitoring and data collection activities provide critical inputs into the assessment process, and their planning and implementation must be linked to assessment and policy needs. Attention to individual environmental problems has given way to a growing realization of the overwhelming degree of interaction among the environmental, economic, and social sectors, and of the degree to which the consequences of these interactions are cumulative, unpredictable, and--in many cases--difficult to repair. Developing an integrated understanding of such threats and the options for dealing with them is a central challenge for the nation.

Moreover, the federal government relies on this information base to assess progress toward national goals as laid out in legislation and to improve and better account for its performance. In recent years, a general consensus has developed on the need to judge the success of the nation's environmental policies against environmental quality outcomes, rather than the number of management plans created, regulations or permits issued, or enforcement actions taken. The adoption of such a performance-based environmental policy, however, has been hampered by the lack of reliable scientific information on environmental conditions and trends.

Federal agencies collect and manage a tremendous volume of environmental data at a significant cost. The federal government spends at least $600 million each year on monitoring conditions and trends of the nation's natural and environmental resources.[Footnote 1] Numerous federal--and, in some cases, regional, state, or local--organizations conduct environmental research and monitoring programs using a variety of methods to address specific problems under different legislative authorizations, such as the Clean Air Act.[Footnote 2] Such activities can yield tangible and far-reaching benefits.
For example, the National Oceanic and Atmospheric Administration (NOAA) produces climate forecasts based on data collected through satellites, ocean buoys, and other data collection activities. These forecasts are often economically valuable because they give the public time and incentive to act to reduce weather- and climate-related losses. In one case, NOAA's forecast enabled residents of California to avoid an estimated $1.1 billion in damages during storms in the winter of 1997-98, according to the agency.

However, adequate information is not always in place to help Congress or others determine how well the environment is doing, judge existing environmental policies, or develop sensible new ones. The nation's environmental data collection and monitoring systems were never intended to be comprehensive for all natural and environmental resource issues nationwide. A comprehensive picture of the nation's environmental and natural resources is not yet possible.

Numerous public and private initiatives are now developing sets of environmental indicators to bridge the gap between needed and available information and to prioritize further data collection. Environmental indicator sets assemble quantitative measures of conditions and trends to assess the state of the environment and natural resources and to gauge progress toward specific goals. In general, indicator sets are designed to provide environmental decision makers and the public with comprehensible information to assist in developing strategic plans, setting priorities, and assessing which programs are, or are not, working well. The widespread development and use of environmental indicator sets has led Congress, federal agencies, states, local communities, and corporations to consider the possible uses for sets of environmental indicators, such as measuring performance and improving oversight of environmental programs.
In this context, you asked us to examine (1) the purposes for which federal and nonfederal organizations are developing and using environmental indicator sets, and how they are being used; and (2) the major challenges facing the development and use of environmental indicator sets. In addressing these objectives, we performed multiple lines of work:

* To identify a list of environmental indicator sets, we elicited the help of experts on environmental indicator set development. After conducting extensive Web and literature searches, conducting multiple background interviews, and following up with contacts made at professional conferences, we identified 48 experts. We then distributed a data collection instrument to these experts, asking them to specify (1) environmental indicator sets with which they were familiar that either were being developed or had been developed in the past 10 years, (2) states that had led or were leading the effort in developing and using environmental indicator sets, and (3) a contact person for each set. Twenty-three of the 48 experts responded. After combining duplicate responses and eliminating the responses that either did not meet our definition of an indicator set or could not be associated with enough information to locate a specific initiative, we developed a pool of 87 environmental indicator sets identified by experts that formed the basis for this review (see app. II for the complete list).

* To develop a list of environmental indicator set developers and users (or "practitioners") for us to survey, we spoke with representatives of each of the 87 indicator sets identified by experts and asked them to name a developer and a user to participate in our survey. This process yielded 87 practitioners, who collectively represented 52 of the indicator sets identified by experts. We then surveyed the practitioners, asking them to identify the primary characteristics of the indicator set, how it was developed, and how the set was being used. Forty-nine of the 87 practitioners responded to our survey.

* To gain a better understanding of the mechanics of environmental indicator set development and use, we conducted in-depth case studies on 8 of the 87 identified environmental indicator sets: two sets for each of four geographic scales--national, regional, state, and local. We selected the sets on the basis of their perceived level of maturity (current and active) and the availability and accessibility of individuals involved in their development and use. We conducted semistructured interviews that allowed practitioners the opportunity to supply information on a wide range of issues relating to their involvement with the development and use of the environmental indicator set. We also reviewed relevant documents that pertained to the development and use of each of the environmental indicator sets. Based on the information gathered, we then drafted case study profiles and provided them to the appropriate program manager for review and comment (see app. III).

* To assess the current status of environmental indicator sets and their impact on policy decisions, we contracted with the National Academy of Sciences (NAS) to convene a 2-day meeting of selected authorities with expertise in the interaction between science and policy making who were familiar with indicator set development and use. NAS staff helped us identify a pool of authorities from which we selected 26 who collectively provided the meeting with a balance of expertise, interdisciplinary knowledge, and cross-jurisdictional representation. The meeting centered on discussions of three broad topics: (1) the organizations developing environmental indicator sets and the impact of these sets across the nation; (2) significant challenges facing the development and use of environmental indicator sets; and (3) what remedies, if any, existed to confront or mitigate these challenges.

In developing our findings, we corroborated the evidence gathered across these lines of work. A more detailed description of our scope and methodology is presented in appendix I. The findings in this report are not intended to apply to all environmental indicator sets. General references to indicator sets in this report refer to the 47 sets that we reviewed in detail--the 8 case studies and the 39 sets represented in our survey results. Furthermore, we did not independently assess the reliability of the data used in the 47 sets we reviewed because those data were not material to our findings. We conducted our work from June 2003 to October 2004 in accordance with generally accepted government auditing standards.

Results in Brief:

Federal and nonfederal organizations are developing and using environmental indicator sets for assessing conditions and trends, communicating complex issues, and supporting performance management activities. Various organizations in the United States--including government agencies, nonprofit groups, universities, and corporations--have developed hundreds of environmental indicator sets. Some environmental indicator sets we reviewed are limited to a political jurisdiction, such as a county, state, or nation, while others are restricted to natural areas, such as watersheds, lake basins, or ecosystems. Some address specific resources, such as water quality or land use, while others focus on quality of life issues or sustainable development.
For instance, the indicators reported through the Great Lakes State of the Lakes Ecosystem Conferences encompass the entire Great Lakes watershed--including aquatic, coastal, and terrestrial components, as well as human health and societal issues. In other cases, cities such as New Orleans, Pittsburgh, and Seattle have developed indicator sets addressing broader issues--including economic prosperity, social equity, and environmental quality--to measure and sustain the quality of life for the citizens in the community. The indicator sets we reviewed are primarily used to assist in strategic planning efforts, communicate complex environmental issues, and track progress toward environmental goals.

While many challenges that inhibit the development of useful sets of environmental indicators are unique to the individual sets being developed, developers reported the following common challenges:

* Ensuring that a sound process is used to develop the indicator sets. Developers reported that creating an indicator set can be an intensely political process that challenges both the credibility and relevance of a set. The indicator sets we reviewed largely relied on collaborative processes to balance the various interests. Such processes define the purpose and intended use of the indicator set, determine the conceptual models--sets of qualitative assumptions used to describe social, organizational, and natural systems--and the criteria for selecting indicators, and select the indicators themselves. Such processes are difficult to manage, but essential to ensure that a set is ultimately accepted and used.

* Obtaining sufficient environmental data to report conditions and trends related to the indicators selected. Obtaining data for use in indicator sets can be difficult largely because long-standing limitations of federal environmental monitoring and data collection activities have not been resolved.
Over half of the respondents to our survey identified obtaining data of sufficient quality as a major challenge to developing indicator sets. Indicator set developers and other experts noted that obtaining sufficient data on environmental conditions is difficult and costly because the many different organizations that collect data on the nation's environment and natural resources do so for specific purposes, in different forms, or on different geographic scales, so the data cannot be readily integrated to support indicators. Sharing such data can have significant, and sometimes prohibitive, costs because transforming the data to suit the needs of another user would require data managers to divert already limited resources from other projects. Moreover, past GAO work has emphasized that the federal government's current environmental information base suffers from data gaps between what is monitored and what needs to be monitored. Because of problems filling gaps in existing data and difficulties in integrating data from different databases, indicator set developers' efforts to identify data of sufficient quality from existing data sources have met with limited success.

* Coordinating and integrating various related indicator sets to develop a more comprehensive understanding of the environment. Experts we interviewed noted that the federal government lacks an organizational framework to provide a consistent basis for working with international, state, or nongovernmental indicator initiatives. Federal environmental indicator set developers employ a wide range of approaches. As a result, significant analytical and technical differences inhibit integration of related sets or synthesis of the diverse range of sets to draw a comprehensive picture of the nation's environment.
Recognizing the need for coordination at the highest levels, the White House Council on Environmental Quality (CEQ) established an Interagency Working Group on Indicator Coordination in 2002 to coordinate and integrate the federal investment in environmental indicator sets. According to officials, the Working Group was created as an ad hoc organization within the Executive Office of the President, operating without explicit responsibility and authority to ensure the continued and full involvement, cooperation, and resources from other federal agencies. Officials of agencies participating in the Working Group acknowledge the need for a more stable structure with the authority and resources necessary to achieve the Working Group's goals. On the basis of our discussions, we believe that a number of organizational options exist and should be studied to determine the most appropriate option, or combination of options, for implementing key functions, such as guiding and coordinating the development and use of environmental indicators.

* Linking specific environmental management actions and program activities to changes in environmental conditions and trends. Developers assembling environmental indicator sets to improve the performance of environmental management programs reported difficulty (1) accounting for relationships between management actions and other factors beyond the agency's control that can potentially affect environmental changes and (2) addressing the time lag between management actions and achieved results. Such problems are consistent with GAO's work on performance measurement in general, and at the Environmental Protection Agency (EPA) in particular. Since our 1988 report on EPA's management, we have stressed numerous times that EPA should place priority on developing indicators to guide the agency's priority setting, strategic planning, and resource allocation.
EPA's Environmental Indicators Initiative illustrates the difficulties in developing a set of national environmental indicators useful for establishing priorities, allocating resources, and assessing results. Past efforts to develop and use environmental indicators by the agency underscore both the importance and difficulty of doing so, and the need for a focused, long-term commitment as changes occur in the agency's senior management and priorities. These previous efforts have been hindered not only by technical difficulties in establishing linkages between program activities and changes in the environment, but also by changes in leadership within the agency and the lack of needed resources for monitoring the natural resources and the environment. Although a noteworthy step, EPA's effort thus far has not functioned as a key component of an agencywide comprehensive approach for managing EPA's work to achieve measurable results. EPA has not initiated or planned an institutional framework with clear lines of responsibility and accountability for developing and using environmental indicators, and no processes, procedures, or work plans exist to link the results of the initiative with EPA's strategic planning and performance reporting cycle. In order to provide a comprehensive picture of environmental and natural resource conditions and trends to assess the nation's position and progress, we recommend that the Chairman of CEQ develop institutional arrangements needed to ensure a concerted, systematic, and stable approach to address the challenges associated with the development, coordination, and integration of environmental indicator sets. 
Furthermore, to build on EPA's initial efforts on indicators and to evaluate the purposes that indicators might serve, we recommend that the EPA Administrator establish clear lines of responsibility and accountability among EPA's various organizational components and identify specific requirements for developing and using environmental indicators. Background: Environmental indicators track changes to the quality and condition of the air, water, land, and ecosystems on various geographic scales, as well as related human health and economic conditions. Although definitions of "environmental indicator" vary, most emphasize that an environmental indicator is a selected quantifiable variable that describes, analyzes, and presents scientific information and its significance. Public and private initiatives assemble sets of indicators to address a variety of environmental issues. Federal agencies, private corporations, local communities, and others develop environmental indicator sets to condense complex topics or concepts, such as the health of ecosystems, into a manageable amount of meaningful information. Indicators are typically presented in statistical or graphical form, but they are also concepts whose meaning extends beyond the numeric value of the metric, reflecting the importance that developers attach to the phenomenon or element of a natural system being measured. For example, figure 1 presents the volume of nitrate carried by major rivers ("nitrate load") per year since the mid-1950s. Scientists generally accept this measure as an indicator of the condition of the nation's freshwater system, which, in turn, is a component of the health of ecosystems in the United States. Figure 1: Nitrate Load Carried by Major Rivers: [See PDF for image] [End of figure] Similarly, figure 2 shows an indicator drawn from a set of indicators addressing state-level environmental protection efforts. 
This indicator presents trend data on the extent of wildfires in California since 1950 as one measure to be used for gauging the performance of state programs to restore forest health. Figure 2: Historical Wildfires in California, 1950 to 1997: [See PDF for image] [End of figure] Organizations have developed and used indicator sets to address a broad array of economic, social, and environmental issues.[Footnote 3] For example, the Healthy People initiative, led by the Department of Health and Human Services, has worked since 1979 to develop a comprehensive set of national objectives for disease prevention and health promotion, and indicators with which to measure them. The Healthy People objectives have been revised each decade since 1980. Furthermore, economic indicator sets have been used to enhance understanding of economic phenomena, such as the business cycle. Economists generally agree that regular and consistent reporting of economic indicators such as unemployment, coupled with short explanations and extended discussion about the causes and consequences of the trends, has supported the development of economic theories and models and informed decision making in many institutions. However, as the National Research Council reported in 2000, while there are many well-known economic indicators, no current environmental indicators have achieved such status--although some environmental indicators, such as sea surface temperature as an indicator of global climate change, have begun to attract considerable attention. While much of the development of national indicators in the United States has focused on specific economic, social, and environmental concerns, the importance of interrelationships among these dimensions is growing. For example, there is a steady trend toward broadening and integrating the types of information used in decisionmaking contexts throughout society. 
The trend includes incorporating environmental and social measures into the regular reporting of economic measures by private corporations (see app. V) and linking environmental information to the information contained in the national economic accounts (see app. VI). Striving to understand the impact that human society has on the environment involves focusing on the interrelationships among economic, social, and environmental processes. Environmental indicator sets are built upon a vast patchwork of environmental information. Federal agencies collect and manage a tremendous volume of environmental data at a cost of at least $600 million each year. Across the United States, state, nonprofit, and private organizations also collect and manage research and monitoring data that feed into federal databases. Federal and nonfederal organizations collect such information to address specific problems under a variety of authorities using various research designs and methodologies, definitions, collection frequencies, and sites as determined by the collection agencies. As shown in table 1, numerous federal agencies are involved in key federal environmental research and monitoring programs, under a variety of legal authorities. Federal environmental monitoring and data collection activities provide critical feedback on the state of the nation's environment. Table 1: Selected Major Environmental Research and Monitoring Networks and Programs[A]: Program name: Coastal Change Analysis Program; Primary federal agencies: NOAA; Primary authority: NOAA Authorization Act of 1992. Program name: Gap Analysis Program; Primary federal agencies: USGS; Primary authority: Fish and Wildlife Coordination Act. Program name: National Wetlands Inventory; Primary federal agencies: FWS; Primary authority: Emergency Wetland Resources Act of 1986. Program name: Breeding Bird Survey; Primary federal agencies: USGS; Primary authority: Migratory Bird Treaty Act. 
Program name: Clean Air Status and Trends Network; Primary federal agencies: EPA; Primary authority: Clean Air Act Amendments of 1990. Program name: Environmental Monitoring and Assessment Program; Primary federal agencies: EPA; Primary authority: Clean Water Act. Program name: Forest Health Monitoring; Primary federal agencies: EPA and Forest Service; Primary authority: Forest Ecosystem and Atmospheric Pollution Research Act of 1988. Program name: Forest Inventory Analysis; Primary federal agencies: Forest Service; Primary authority: Forest and Rangeland Renewable Resources Research Act of 1978, as amended by the Agricultural Research, Extension, and Education Reform Act of 1998. Program name: National Atmospheric Deposition Program/National Trends Network; Primary federal agencies: USGS; Primary authority: Clean Air Act Amendments of 1990. Program name: National Air Monitoring System/State and Local/Photochemical Air Monitoring System; Primary federal agencies: EPA; Primary authority: Clean Air Act Amendments of 1977. Program name: National Stream Quality Accounting Network; Primary federal agencies: USGS; Primary authority: USGS Organic Act. Program name: National Stream Gauging Network; Primary federal agencies: USGS; Primary authority: USGS Organic Act. Program name: National Resources Inventory; Primary federal agencies: NRCS; Primary authority: Rural Development Act of 1972. Program name: National Status and Trends; Primary federal agencies: NOAA; Primary authority: Marine Protection, Research, and Sanctuaries Act of 1972, as amended. Program name: NMFS marine mammal stock assessments; Primary federal agencies: NOAA and NMFS; Primary authority: Marine Mammal Protection Act Amendments of 1994. Program name: Remote Automated Weather System; Primary federal agencies: Multiagency; Primary authority: Federal agency land management authorities. Program name: Snowpack Telemetry; Primary federal agencies: NRCS; Primary authority: Pub. L. No. 74-46. 
Program name: Agricultural Research Service; Primary federal agencies: USDA; Primary authority: USDA research authorities (e.g., 7 U.S.C. 1010). Program name: Forest and rangeland sites; Primary federal agencies: USDA; Primary authority: Forest and Rangeland Renewable Resources Research Act of 1978. Program name: Long-term ecological research; Primary federal agencies: NSF; Primary authority: National Science Foundation Act of 1950, as amended. Program name: National Park Ecological Monitoring Program; Primary federal agencies: NPS; Primary authority: NPS Organic Act. Program name: Coastal Ocean Program; Primary federal agencies: NOAA; Primary authority: NOAA Authorization Act of 1992. Program name: National Marine Sanctuary; Primary federal agencies: NOAA; Primary authority: National Marine Sanctuaries Amendments Act of 2000. Program name: Hydrologic Benchmark Network; Primary federal agencies: USGS; Primary authority: USGS Organic Act. Program name: National Water Quality Assessment; Primary federal agencies: USGS; Primary authority: USGS Organic Act. Program name: Water, Energy, and Biogeochemical Budgets Program; Primary federal agencies: USGS; Primary authority: Global Change Research Act of 1990. Source: GAO analysis of National Science and Technology Council data. [A] Networks and programs in this list were drawn from an inventory originally reported by the National Science and Technology Council in 1997. [End of table] Although extensive, the environmental information base in the United States does not support comprehensive environmental and natural resource assessments. 
In 1997, the National Science and Technology Council (NSTC)--a Cabinet-level council that serves as the principal means for the president to coordinate research and development across federal agencies--evaluated the status of federal agency environmental monitoring and research activities and found that monitoring programs do not provide integrated data across multiple natural resources at the various scales needed to develop policies that take into account current scientific understanding. The NSTC called for a strategy for environmental monitoring and research to enable comprehensive assessments.[Footnote 4] More recently, the National Council for Science and the Environment--a nonprofit organization addressing the scientific basis for environmental decision making--convened a national conference of more than 450 scientists, policymakers, and academicians in December 2000 that underscored the need for comprehensive national assessments.[Footnote 5] Until 2000, the White House Council on Environmental Quality (CEQ) was required to transmit an annual environmental quality report to Congress. Although the annual reporting requirement is no longer in effect, CEQ is still required to accumulate the necessary data and other information needed for a continuing analysis of changes and trends in the natural environment and an interpretation of their underlying causes.[Footnote 6] Whereas scientists, agency officials, and academicians generally agree on the need for periodic reporting of conditions and trends of environmental and natural resources, no consensus has been reached on who should be responsible for this task or how it would be best achieved. The federal government relies on such trend information to assess progress toward national goals and to improve and better account for its performance, but credible and reliable information cannot always be obtained. 
In recent years, a general consensus has developed on the need to judge the success of the nation's environmental policies against environmental quality outcomes, rather than the number of management plans created, regulations or permits issued, or enforcement actions taken. The Government Performance and Results Act (GPRA)--the centerpiece of a statutory and management framework laid out in the 1990s as the foundation for strengthening government performance and accountability--is designed to inform congressional and executive decision making by providing objective information on the relative effectiveness and efficiency of federal programs and spending. GPRA requires both a connection to the structures used in congressional budget presentations and consultation between the executive and legislative branches on agency strategic plans to ensure Congress an oversight stake in GPRA's success. The current administration has made the integration of performance and budget information one of five governmentwide management priorities under the President's Management Agenda. Central to this initiative is the Program Assessment Rating Tool (PART). The Office of Management and Budget developed PART as a diagnostic tool meant to provide a consistent approach to evaluating federal programs, and applied it as one tool in formulating the executive branch's fiscal years 2004 and 2005 budget requests. The adoption of such a performance-based environmental policy, however, has been hampered by the lack of reliable scientific information on environmental conditions and trends. Environmental Indicator Sets Are Developed for a Variety of Purposes, and Users Generally Report Positive Impacts: Government agencies, universities, corporations, and other organizations have developed environmental indicator sets to address environmental issues on various geographic scales. 
Most of the environmental indicator sets we reviewed were developed for a variety of purposes, including assessing environmental conditions and trends, raising public awareness, communicating complex issues, and tracking progress toward goals. Indicator set users reported that such sets generally had positive impacts, and were especially useful in assessing environmental conditions and trends, communicating complex environmental issues, and developing strategic plans. However, it is difficult to determine the benefits that arise from these impacts. Organizations Develop Environmental Indicator Sets for Specific but Varied Purposes: Various organizations throughout the United States--including government agencies at national, state, and local levels; nonprofit groups; universities; and corporations--have developed hundreds of environmental indicator sets in recent years to address environmental issues on a variety of geographic scales. Some environmental indicator sets are limited to a political jurisdiction, such as a county, state, or nation; others are limited to natural areas, such as watersheds, lake basins, or ecosystems. Many environmental indicator sets address complex, crosscutting issues--such as ecosystem health--that are affected by environmental, economic, and social factors. For instance, the Great Lakes Water Quality Agreement calls for the development of a set of about 80 ecosystem health indicators for the Great Lakes to inform the public and report progress toward achieving the objectives of the agreement. Indicators address specific geographic zones of the entire Great Lakes Basin ecosystem--such as offshore, nearshore, coastal wetlands, and shoreline--and other issues such as human health, land use, and societal well-being. The indicator list is continually evolving. 
Every 2 years, Environment Canada--the Canadian agency primarily responsible for the preservation and enhancement of the quality of the natural environment--and EPA host a review and discussion of the indicators as required under the agreement, either at the State of the Lakes Ecosystem Conference or through alternate processes. Moreover, some cities, such as New Orleans, Pittsburgh, and Seattle, have developed comprehensive indicator sets that focus on broader issues that incorporate such factors as economic prosperity, social equity, and environmental quality to measure and sustain the quality of life for the citizens in the community. Respondents to our survey noted that the most common purposes for developing environmental indicator sets were to assess environmental conditions and trends, educate and raise awareness among the public, simplify and communicate complex issues, and track progress toward environmental goals (see fig. 3).[Footnote 7] Figure 3: Ten Purposes for the Development of Environmental Indicator Sets Most Frequently Cited by Survey Respondents: [See PDF for image] Note: Results out of a possible total of 42 responses. [End of figure] Environmental indicator sets have been developed to serve multiple purposes and audiences. For example, the H. John Heinz III Center for Science, Economics, and the Environment (Heinz Center) developed The State of the Nation's Ecosystems indicator set, published in 2002, to identify a succinct set of indicators to report on the ecological condition of the nation, identify data gaps, and provide information to a broad audience. The intended audience of the indicator set encompassed members of Congress, executive branch agencies, business executives, environmental advocacy groups, state and local officials, and the general public. Most environmental indicator sets are developed voluntarily. 
For example, the California Environmental Protection Agency (Cal/EPA) began developing the Environmental Protection Indicators for California (EPIC) in 2001 as part of the implementation plan for the agency's 2000 Strategic Vision document. Cal/EPA made a commitment to focus more on measurable environmental results in assessing the effectiveness of its environmental programs and in making program adjustments to better meet the state's environmental protection goals. The EPIC project developed about 85 indicators, organized into categories that mirror the agency's areas of authority, and published them in an April 2002 report. Similarly, Minnesota's Department of Natural Resources developed environmental indicators and targets for its Strategic Conservation Agenda. The department developed about 75 indicators in six performance areas to help the agency better define its priorities, communicate its progress, and manage for environmental results. Other environmental indicator sets are developed in response to legal mandates. For example, the state of Michigan publishes a biennial report as required under the Michigan Natural Resources and Environmental Protection Act. The publication, prepared jointly by the Michigan Department of Environmental Quality and the Department of Natural Resources, reports on the conditions and trends of the environment, such as land use and cover, mammal and fish populations, and ambient air pollutant levels. 
At the federal level, the National Park Service created the Natural Resource Challenge in 1999 in response to the direction of the National Parks Omnibus Management Act of 1998 to enhance national parks management by using the highest quality science and information, and to create a resource inventory and monitoring program to establish baseline conditions and long-term trends.[Footnote 8] The Natural Resource Challenge includes indicators--referred to as vital signs--to identify ecosystem health status and trends and to determine compliance with laws and regulations. For example, park managers have used vital signs, such as the concentration of air pollutants in precipitation and its effects on water quality, to detect potential problems and identify steps to restore ecological health of park resources. Environmental Indicator Set Users Generally Report Positive Impacts: The use of environmental indicator sets has resulted in a variety of positive impacts. A majority of users of environmental indicator sets told us that the sets are either useful or very useful for their needs, especially in (1) assessing environmental conditions and trends, (2) communicating complex environmental issues, and (3) developing strategic plans. However, largely because indicator sets themselves do not create change--instead policymakers employ the information when making decisions--it is difficult to measure the benefits that accrue from these impacts.[Footnote 9] The indicator sets we reviewed assess environmental and natural resource conditions and trends, and have been used to help identify data gaps and research needs, provide early warning of potential environmental problems, allocate resources, and analyze alternatives for environmental management. 
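A vital-signs approach like the Park Service's can be reduced to a simple pattern: compare recent observations of an indicator against its baseline condition and flag departures for follow-up. The sketch below illustrates that pattern only; the function, data values, and 20-percent tolerance are invented for illustration and are not drawn from the Natural Resource Challenge.

```python
# Hypothetical sketch: flag a "vital sign" whose recent observations
# drift beyond a baseline range, as an early-warning indicator.
# All names, numbers, and thresholds here are illustrative, not NPS data.

def flag_vital_sign(baseline, recent, tolerance=0.20):
    """Return True if the mean of recent observations departs from the
    baseline mean by more than `tolerance` (as a fraction of baseline)."""
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - base_mean) / base_mean > tolerance

# Illustrative values: pollutant concentration in precipitation (mg/L).
baseline = [1.0, 1.1, 0.9, 1.0]   # long-term baseline condition
recent = [1.4, 1.5, 1.3]          # last three monitoring seasons

if flag_vital_sign(baseline, recent):
    print("potential problem: indicator outside baseline range")
```

In practice a monitoring program would use far more rigorous statistics (trend tests, seasonal adjustment), but the decision logic, measure, compare to baseline, flag, is the same.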
Several of the applications help demonstrate how environmental indicator sets have had positive impacts: * Experts called the Great Lakes State of the Lakes Ecosystem Conference (SOLEC) indicator set a key factor in identifying needed management approaches and a positive catalyst in promoting collaboration on key issues. In particular, the SOLEC indicator set helped influence the Fish and Wildlife Service's decision to focus on the development of an ecosystem/watershed approach to environmental management for the Great Lakes that crosses multiple political boundaries. * The ecological framework designed for the Heinz Center's State of the Nation's Ecosystems indicator set is used to inform the design of the ecological portion of the international Global Ocean Observing System--a major multinational initiative that is designed to observe, model, and analyze marine resources. In addition, the Heinz Center indicator set identified a number of cases of missing or inadequate data needed to provide a complete picture of ecosystem condition, such as data to support an indicator measuring the biological condition of the soil in use as farmland. The center is working with federal, state, and local governmental, nongovernmental, and private organizations to call attention to the need to prioritize and fill these data gaps. * The National Coastal Assessment component of EPA's Environmental Monitoring and Assessment Program (EMAP) provides a more complete picture of the condition of the nation's estuaries. EPA's Office of Research and Development led the creation of the indicator set and monitoring program that constitute the assessment, which includes five aggregate indicators--water quality, sediment quality, coastal habitat, benthic community structure, and fish tissue contaminants. 
Three coastal states have fully implemented the monitoring and indicator approach to fulfill reporting requirements under the Clean Water Act,[Footnote 10] and 21 other states have begun to implement the approach or have used it to assess a part of their estuaries. Users reported that the indicator set and monitoring design provided a more effective approach to consistently measuring estuary conditions for coastal states. * The development and use of an environmental indicator set for the Chesapeake Bay influenced the strategic allocation of approximately $18 million of federal funds in fiscal year 2003 toward meeting restoration goals for the bay. The Chesapeake Bay Program--established by the 1983 Chesapeake Bay Agreement, one of three overarching agreements aimed at restoring the health of the bay--began developing environmental indicators to support goal setting, to define targets and endpoints for restoration of the bay, and to make the program more accountable to the public by defining and communicating the bottom-line environmental results achieved by the restoration program. The program distributes funds in the form of grants to state governments, local governments, interstate agencies, nonprofits, universities, and colleges to implement the restoration goals of the Chesapeake 2000 Agreement and to collect data and other information for use in the indicator set. The indicator set uses monitoring data and other information to measure environmental conditions of the Chesapeake Bay and progress in meeting goals. Environmental indicator sets also serve as powerful tools for communicating information on complex environmental issues in a way that makes them more comprehensible and accessible. Two organizations in particular--the Pacific Institute for Studies in Development, Environment, and Security through its West Oakland Environmental Indicators Project and the Jacksonville Community Council Inc. 
(JCCI)--use their respective indicator sets to identify environmental issues, perform research to better understand the issues, and develop appropriate solutions. For example, West Oakland's indicator set helped decision makers identify and eventually close a major source of air pollution in the community, which likely would not have been accomplished without extensive public awareness and action galvanized by the indicator set. Similarly, JCCI uses its indicator set to identify issues for further study, such as ensuring an adequate water supply and reducing the municipal garbage burden, which the indicator set had shown to be areas of existing or emerging problems. At the culmination of each study, JCCI issues a report with recommendations to improve the situation and creates a task force to ensure implementation of the recommendations.[Footnote 11] The process of developing an environmental indicator set enhances strategic planning by engaging a broad-based group of individuals in a structured, collaborative process. As we reported in March 2004, strategic planning for performance-based, results-oriented management requires transforming organizational cultures to improve decision making, maximize performance, and ensure accountability.[Footnote 12] Such a transformation requires investments of time and resources as well as sustained leadership, commitment, and attention. Throughout our review, indicator set developers and users emphasized the importance of broad collaboration in developing indicators as a way of strengthening their relevance and acceptance. The developers of some indicator sets use the indicator set development process to advance dialogue within their community or region by bringing together many different sectors, fostering new alliances and relationships, and providing a forum to discuss ways to better measure and manage environmental issues. 
For instance, staff members of some organizations, such as the Minnesota Department of Natural Resources and the California Environmental Protection Agency, told us that the process of developing and refining their indicator sets helped staff identify and define environmental management goals to better manage for results. Cal/EPA, for example, has traditionally assessed the success of its environmental programs based on measures of activities, such as the number of permits granted or notices of violations issued. The intent of developing environmental indicators at Cal/EPA was to measure environmental results and to use the indicators to support a results-based management system. The process of developing the indicators brought various staff together to define issues and parameters to develop indicators that could be used to manage for results. Nevertheless, it is not easy--or sometimes even possible--to measure the benefits of the sets that stem from these impacts. Developers reported that systematic monitoring of the effectiveness of environmental indicator sets and their benefits varies, due in part to the resources such monitoring requires. Moreover, developers and users reported that environmental indicator sets themselves did not create change from which benefits could be measured; rather, they might influence environmental management activities and thus yield benefits by affecting the quality of a decision. However, such measurement difficulties should not be seen as a barrier to developing and using indicator sets. Instead, these unanswered questions highlight the need for additional research on how to better gauge the return on investment for organizations that develop indicator sets. Major Challenges Facing the Development and Use of Environmental Indicator Sets: A number of challenges face developers and users of environmental indicator sets. 
Selecting from a broad range of issues, survey respondents most frequently cited the 10 issues presented in table 2 as major or moderate challenges. Table 2: Ten Challenges Most Frequently Cited as Major or Moderate by Survey Respondents: Challenge: Obtaining data of sufficient quality; Number of responses: Major: 22; Number of responses: Moderate: 14; Number of responses: Total: 36. Challenge: Obtaining data of appropriate geographic scope; Number of responses: Major: 19; Number of responses: Moderate: 13; Number of responses: Total: 32. Challenge: Selecting sufficient indicators; Number of responses: Major: 15; Number of responses: Moderate: 16; Number of responses: Total: 31. Challenge: Obtaining needed funds; Number of responses: Major: 15; Number of responses: Moderate: 13; Number of responses: Total: 28. Challenge: Clearly defining the phenomena to be measured; Number of responses: Major: 12; Number of responses: Moderate: 12; Number of responses: Total: 24. Challenge: Determining the criteria for selecting indicators; Number of responses: Major: 5; Number of responses: Moderate: 18; Number of responses: Total: 23. Challenge: Staff with necessary expertise; Number of responses: Major: 7; Number of responses: Moderate: 15; Number of responses: Total: 22. Challenge: Clearly defining the purpose of the indicator set; Number of responses: Major: 4; Number of responses: Moderate: 18; Number of responses: Total: 22. Challenge: Clearly defining the intended use of the set; Number of responses: Major: 4; Number of responses: Moderate: 17; Number of responses: Total: 21. Challenge: Determining the conceptual framework to use; Number of responses: Major: 5; Number of responses: Moderate: 15; Number of responses: Total: 20. Source: GAO. Note: Results out of a possible total of 42 responses. Respondents chose from five response categories: Major, Moderate, Minor, Not a Challenge, or Don't Know. 
[End of table] Interviews with indicator set developers and other experts revealed that many challenges tended to revolve around the specific circumstances affecting the particular sets. However, we identified several categories of common challenges faced by indicator set developers and users on the basis of the survey responses and detailed interviews with developers and other experts: * Ensuring that a sound process is used to develop the indicator sets. Developers reported that support for an indicator set can be undermined if it is viewed as biased because of its association with a particular political perspective or leader. The process of developing an indicator set can be an intensely political process that challenges both the credibility and relevance of a set. Developers of the sets we reviewed largely relied on collaborative processes to define the purpose and intended use of the indicator set, determine the conceptual model and the criteria for selecting indicators, and select the indicators themselves. Such processes are difficult to manage in ways that ensure a set's credibility and relevance. * Obtaining sufficient environmental data to report conditions and trends related to the indicators selected. Over half of the respondents to our survey identified obtaining data of sufficient quality as a major challenge to developing indicator sets. Indicator set developers and other experts noted that the many different organizations that collect data on the nation's environment and natural resources do so for specific purposes in different forms or on different geographic scales. * Coordinating and integrating various related indicator sets in order to obtain a better understanding of the environment. Experts we interviewed noted that the federal government lacks an organizational framework to provide a consistent basis for working with international, state, or nongovernmental indicator initiatives. Environmental indicator set developers employ a wide range of approaches. 
As a result, significant analytical and technical differences inhibit integration of related sets or synthesis of the diverse range of sets to draw a comprehensive picture of the nation's environment. The White House Council on Environmental Quality (CEQ) recognized the need for coordination and established an Interagency Working Group on Indicator Coordination (Working Group) in 2002 to coordinate and integrate the federal investment in environmental indicator sets. * Linking specific environmental management actions and program activities to changes in environmental conditions and trends. Organizations that develop environmental indicator sets to improve the performance of environmental management programs can struggle to link management actions to environmental conditions and trends and to address the time lag between management actions and achieved results. EPA's past efforts to develop and use environmental indicators underscore both the importance and difficulty of doing so, and the need for a focused, long-term commitment as the agency undergoes changes in management and priorities. Ensuring a Sound Process to Develop Indicator Sets: Developers reported that support for an indicator set can be undermined if it is viewed as biased because of its association with a particular political perspective or leader. The process of developing an indicator set can be an intensely political process that challenges both the credibility and relevance of a set. Because many indicator set developers strive to keep the number of indicators as small as possible, selecting one indicator for a set necessarily excludes others. In some cases, that means an issue of interest to a particular stakeholder or user group does not get measured by the set. 
For example, the criteria used to select indicators for the Georgia Basin Puget Sound Ecosystem indicator set limited the number of indicators to only six, which led to gaps in the presentation of information on the complete state of the ecosystem. The process used to select indicators can also affect the usefulness of a set: a poorly managed process can produce a set of indicators of little or no relevance to users' needs. Moreover, developers reported that support for an indicator set can be undermined if it is viewed as biased or nonobjective because of its association with a particular political perspective or leader. Indicator set developers stressed the need for a balanced process to manage such concerns. In particular, involving a set's varied users, developing and applying sound selection criteria, and identifying appropriate conceptual models were cited as important elements of the development process. Many developers we interviewed noted the importance of--and difficulties in--incorporating users' needs when selecting indicators. Identifying, engaging, and balancing the information needs of users can be a resource-intensive process. For example, the Heinz Center spent significant time conducting outreach to each of the four sectors--businesses; environmental and conservation advocacy organizations; academia; and federal, state, and local governments--it identified as potential users of its indicator set on The State of the Nation's Ecosystems. The Heinz Center engaged about 150 representatives in a 3-year consensus-building process, leading to the indicator set that was eventually adopted. Similarly, the Minnesota Department of Natural Resources' indicators supporting the state's Strategic Conservation Agenda were developed collaboratively by the department's Science Policy Unit--housed within the department's Office of Management and Budget Services--and departmental operations managers representing all divisions and regions.
Developers stated that the process, although resource-intensive, ensured that the agency had support from users and other stakeholders of the indicator set. However, not all indicator set developers have the resources to establish such a process or sustain it over time. As a result, indicator sets can have limited applicability to users' needs. We found that some affected user groups were not identified, not effectively involved in the development of indicator sets, or both. In many of the cases we reviewed, indicator set developers employed specific criteria to guide indicator selection. Such criteria describe desired characteristics, attributes, or standards--such as relevance to environmental policies or scientific soundness--that indicators must meet to be eligible for inclusion in a set (see table 3).

Table 3: Ten Criteria Used to Select Indicators Most Frequently Cited by Survey Respondents:

Criteria:                      Number of responses:
Measurable                     35
Relevant                       35
Appropriate geographic scale   34
Understandable                 34
Data available                 32
Data quality                   31
Importance                     28
Appropriate temporal scale     28
Data comparability             26
Trend data available           24

Source: GAO.

Note: Results out of a possible total of 42 responses.

[End of table]

In some cases, set developers engage users and other stakeholders in defining selection criteria early in the selection process to screen, rank, or otherwise prioritize the field of potential indicators before addressing and selecting the individual indicators.
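The screening-and-ranking step that developers apply to candidate indicators can be sketched in code. The following is a minimal, hypothetical illustration--the criterion names are drawn from table 3, but the weights, candidate indicators, and function names are invented for the example and do not represent any actual indicator set's methodology.

```python
# Hypothetical sketch of criteria-based indicator screening: each candidate
# is scored by the weighted selection criteria it meets, then ranked.
# Weights and candidates are illustrative assumptions, not GAO data.

CRITERIA_WEIGHTS = {
    "measurable": 3,
    "relevant": 3,
    "data_available": 2,
    "understandable": 1,
}

def score(candidate):
    """Sum the weights of the selection criteria a candidate indicator meets."""
    return sum(w for c, w in CRITERIA_WEIGHTS.items() if c in candidate["meets"])

def screen(candidates, top_n=2):
    """Rank candidates by score and keep the top_n for stakeholder review."""
    return sorted(candidates, key=score, reverse=True)[:top_n]

candidates = [
    {"name": "vehicle miles traveled",
     "meets": {"measurable", "relevant", "data_available"}},
    {"name": "perceived air quality",
     "meets": {"understandable"}},
    {"name": "ozone concentration",
     "meets": {"measurable", "relevant", "data_available", "understandable"}},
]

selected = screen(candidates)
print([c["name"] for c in selected])
```

In practice, as the surrounding discussion notes, the weights themselves would be negotiated with users and other stakeholders before any scoring takes place.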
For example, the process for selecting indicators for the Environmental Protection Indicators for California indicator set involved developers first identifying environmental issues that are significant for the state--such as air quality or human health--along with more specific components of such issues, such as criteria air pollutants (ozone, carbon monoxide, and particulate matter). Developers then identified relevant, measurable parameters within each issue, such as vehicle miles traveled, to help derive candidate indicators. Candidate indicators were then subjected to criteria, such as data quality, representativeness, sensitivity, and decision support, to help select the final set of indicators. In addition, many indicator set developers designed conceptual models to serve as foundations for structuring and selecting indicator sets. Conceptual models present the set developers' understanding of how systems operate, and help integrate the different fields of science relevant to an issue that cuts across environmental disciplines, such as ecosystem management. Such models can enhance the degree to which an indicator set incorporates the best available scientific knowledge and understanding, presents assumed causal relationships between different variables, and identifies different types of performance management indicators for assessing the results of specific environmental policies. For example, one common model is the pressure-state-response model, which helps developers understand real and potential causal effects of human actions, such as population growth and pollution, on the environment.

Obtaining Sufficient Environmental Data to Report Conditions and Trends:

Obtaining data for use in indicator sets can be difficult largely because longstanding limitations of federal environmental monitoring and data collection activities have not been resolved.
Over half of the respondents to our survey identified obtaining data of sufficient quality as a major challenge to developing indicator sets. Indicator set developers and other experts noted that the many different organizations that collect data on the nation's environment and natural resources do so for specific purposes. To meet these purposes, the data are collected in different forms or on different geographic scales, and thus cannot be readily integrated to support indicators. Such limitations of federal environmental monitoring and data collection activities are long-standing and, despite a number of attempts, have not been resolved. Responsibility for research, monitoring, and assessment of various environmental and natural resources currently resides in various federal and other organizations whose activities focus on achieving specific programmatic objectives. Differences in definitions; study design and methodology; frequency of collection; site selection; quality assessment and control; and other technical issues compound the fragmentation of data collection activities. For example, our January 2001 report detailed major management issues facing EPA, one of which was the agency's outmoded data management system, which relies on separately designed, media-specific databases that are generally not technically compatible.[Footnote 13] Data generated through such disparate activities are not being integrated in common databases or otherwise being made accessible to potential users. Data sharing can also have significant costs: because environmental data are generally collected according to the specific needs or purposes of the collecting agency or organization, transforming the data to suit the needs of another user would require data managers to divert already limited resources--staff time, computing resources, and money--from ongoing agency projects.
The United States' recent commitment to develop a Global Earth Observation System of Systems (GEOSS) underscores the need for coordinated information about the environment. GEOSS is a 10-year international cooperative effort to make it possible for all existing and new earth-observing hardware and software around the globe to communicate so they can continuously monitor the land, sea, and air. GEOSS is built on the idea that the dozens of observational systems now generating reams of data around the world would be more powerful if they could be combined and widely disseminated. A completed 10-year implementation plan will be presented at the third Earth Observation Summit in February 2005. More than 15 federal agencies--including NOAA and EPA--and several White House offices are developing a draft strategic plan for the United States Integrated Earth Observation System, which will be a key component of the GEOSS 10-year plan. Moreover, gaps in existing data limit the usefulness of many federal environmental datasets to support the crosscutting issues addressed by indicator sets. Our past work has emphasized that the federal government's current environmental information base suffers from data gaps between what is monitored and what needs to be monitored.
For example, we reported in July 1998 and again in December 2002 on how the lack of consistent data on federal wetlands programs implemented by different agencies prevented the government from measuring progress toward achieving the governmentwide goal of no net loss of the nation's wetlands.[Footnote 14] Furthermore, we reported in June 2004 that hundreds of entities across the nation collect water quality data that provide a great deal of information about the condition of the nation's waters; however, because of the way in which these entities collect the data, the United States does not have enough information to provide a comprehensive picture at the national level.[Footnote 15] This shortfall impairs the nation's understanding of the state of its waters and complicates decision making on such critical issues as which waters should be targeted for cleanup and how such cleanups can best be achieved. Problems with integrating databases and filling gaps in federal environmental data are long-standing issues that were recognized at least 3 decades ago. In 1970, the Council on Environmental Quality (CEQ) noted in its first report to Congress on the nation's environment that contemporary efforts did not provide the type of information or the geographic coverage needed to evaluate the condition of the nation's environment, track changes in its quality, or trace their causes.[Footnote 16] Moreover, academicians have found that nearly every comprehensive study during this period on national environmental protection has called for more coherent and comprehensive information on the state of our environment and natural resources.[Footnote 17] Congress has discussed federal environmental data and indicator issues many times since 1970. Figure 4 shows these efforts, as well as selected relevant scholarly reports issued during the same period.
Figure 4: Selected Activities Identifying the Need for More Comprehensive Environmental Information:

[See PDF for image]

Note: Refer to appendix IV for a description of legislation, hearings, and reports.

[End of figure]

Although not intended to be exhaustive, this figure illustrates significant legislative and academic milestones in federal environmental data and indicator management over the last 35 years. As shown in the figure, both Congress and the academic community had already identified and analyzed, but not addressed, many of the fundamental issues confronting indicator development and data management by the close of the 1970s. Because of problems filling gaps in existing data and difficulties in integrating data from different databases, indicator set developers' efforts to identify data of sufficient quality from existing data sources have met with limited success. For example, the developers of the Heinz Center's State of the Nation's Ecosystems report were unable to obtain sufficient data for reporting nationally on 45 of the 103 indicators included in the report. The report identified Total Impervious Area--a classification of urban and suburban areas according to the percentage of roads, parking lots, driveways, and rooftops that they contain--as an important measure of the degree of urbanization of the United States, and closely related to water quality in urban and suburban areas. However, the report explained that such data had not been compiled regionally or nationally and that there were no standard methods for estimating this metric.[Footnote 18] As illustrated in table 4, other national indicator sets experienced a similar challenge. Table 4: Sufficiency of Current Environmental Data to Support Three Major National Indicator Sets: Indicator set: The State of the Nation's Ecosystems; Number of indicators: 103; Indicators with sufficient data: 58 (56%); Indicators with insufficient data: 45 (44%).
Indicator set: Draft Report on the Environment 2003; Number of indicators: 146; Indicators with sufficient data: 44 (30%); Indicators with insufficient data: 102 (70%). Indicator set: National Report on Sustainable Forests--2003; Number of indicators: 67; Indicators with sufficient data: 8 (12%); Indicators with insufficient data: 59 (88%). Sources: EPA, Forest Service, and the Heinz Center. Note: GAO applied the various quality criteria developed and reported by each project. GAO did not independently evaluate these criteria or the projects' application of the criteria. [End of table]

Coordinating and Integrating Indicator Sets to Improve the Current Understanding of Environmental Conditions and Trends:

Experts we interviewed noted that the federal government lacks an organizational framework or institutional arrangements to provide a consistent basis for working with international, state, or nongovernmental indicator initiatives. Currently these efforts are not coordinated, resulting in significant differences and incompatibilities between sets that inhibit integration and synthesis. For example, federal environmental indicator sets cannot always be integrated with each other, or with regional- or state-level indicator initiatives on similar topics, largely because the sets are based on different frameworks and include indicators relevant at different geographic scales. As a result, congressional, federal agency, and other users must reconcile information that seems to deliver inconsistent or conflicting messages. For example, both the Forest Service's National Report on Sustainable Forests--2003 and the Heinz Center's State of the Nation's Ecosystems include an indicator related to species rarity: the former tracks the status (threatened, rare, vulnerable, endangered, or extinct) of forest-dependent species at risk of not maintaining viable breeding populations, as determined by legislation or scientific assessment, while the latter tracks at-risk native forest species.
However, though the two indicators appear to be similar, the data in each set are presented in different ways and could appear confusing--even contradictory--to a reader unfamiliar with the different risk classification schemes used. Moreover, even as federal activity developing indicator sets is increasing, developers at the various agencies may be missing opportunities to share knowledge and transfer experience. Federal developers have little to no access to the best practices and lessons learned through others' experience with indicator sets--knowledge needed to optimize the federal investment in this activity. Despite the extensive federal involvement in developing environmental indicators over the past decade, no clearinghouse has been established for collecting, classifying, and distributing information on best practices and lessons learned, either within or outside of the federal government. Experts involved in our meeting on environmental indicator sets said that such a clearinghouse could help developers avoid the sometimes duplicative time and resources currently devoted to identifying the elements of effective indicator sets. Several federal agencies have acknowledged the need for such a resource and have begun taking initial actions to address it. For example, the Forest Service's Northeastern Area State and Private Forestry unit recently developed a sourcebook and an Internet-based clearinghouse to disseminate information for states and other organizations attempting to use indicators to assess forest sustainability. Recognizing the need for improved coordination at the highest federal levels, the Interagency Working Group on Indicator Coordination was created at the request of the Chairman of CEQ in a December 31, 2002, memo.
One purpose of the National Environmental Policy Act (NEPA) is to enrich the understanding of ecological systems and natural resources important to the nation.[Footnote 19] The act requires that CEQ review and appraise federal programs and activities to determine the extent to which these activities are achieving the purposes of NEPA and to make appropriate recommendations to the President. In addition, NEPA requires CEQ to document and define changes and trends in the natural environment, and to accumulate the necessary data and other information for a continuing analysis of such changes and trends and an interpretation of their underlying causes. The Interagency Working Group on Indicator Coordination is composed of representatives from the Departments of Agriculture, Commerce, Defense, Health and Human Services, the Interior, and Transportation, as well as EPA and the White House Offices of the Federal Environmental Executive, Management and Budget, and Science and Technology Policy. The Working Group first met in March 2003 to consider ways to enhance the nation's capacity to regularly report on natural and environmental resources, as well as related health, social, and economic factors, using a comprehensive set of indicators. It is currently considering a National System of Indicators on Natural and Environmental Resources, and is studying ways to improve institutional arrangements among the federal agencies for statistical reporting of such indicators. The Working Group has developed an approach and policy framework for a national indicator system that builds on existing federal and nonfederal efforts, and has agreed that the system is a long-term goal.
Furthermore, the Integration and Synthesis Group, an effort to coordinate several key federal "building block" indicator sets[Footnote 20] under the leadership of the Working Group, has begun to develop a systems-based framework to organize environmental and natural resource indicators and provide a strong theoretical foundation for future integration work. The Working Group has also agreed on a general conceptual framework to guide the selection and use of indicators and is working to reach agreement on a detailed architecture to guide the management and use of data and information technology resources, and institutional arrangements to develop and operate a national system of indicators. Officials of agencies participating in the Working Group acknowledge the need for a more stable structure with the authority and resources necessary to achieve the Working Group's goals. In this regard, as an ad hoc organization within the Executive Office of the President, the CEQ Working Group lacks a stable institutional arrangement with explicit responsibility and authority to ensure the continued and full involvement, cooperation, and resources from other federal agencies. Experts participating in our two-day meeting on environmental indicator sets hosted by the National Academy of Sciences--including officials from CEQ, EPA, NOAA, the U.S. Geological Survey, and the Forest Service (within the Department of Agriculture)--discussed a number of different structures that could be employed to create a lead organization responsible for coordinating and integrating environmental indicator sets. Specifically, they discussed models ranging from using an executive order to build upon existing activity to creating a new quasi-governmental organization with the authority to oversee the development of a national environmental indicator system. 
In particular, the experts emphasized the importance of credibility and transparency as keys to the success of such an endeavor, in addition to authorities for addressing the widespread challenges of developing coordinated federal environmental indicator sets and ensuring the continued and full involvement, cooperation, and resources of the federal agencies. The experts did not settle on any particular approach, but instead noted that all of the options available should be studied to determine which option or combination of options is most appropriate. Furthermore, they generally agreed that whatever institutional arrangements are developed should be capable of performing the following functions:

* designing an information architecture using the best available information technology;

* providing leadership, vision, and overall scope;

* providing guidance and coordination with regard to environmental indicator development and use;

* assisting in environmental indicator selection, development, improvement, and evaluation;

* designing and managing data collection and monitoring, including consolidation and prioritization (identifying potential data sources, identifying areas where no data exist, and establishing ways to fill data gaps to support environmental indicators);

* organizing statistical compilation and reporting (connecting data to environmental indicator sets);

* identifying environmental research and development focus areas--including environmental indicator methods--and developing and investigating conceptual frameworks, statistical methods, and bases for interpretation, assessment, and diagnosis;

* interpreting environmental indicators for planning, policy, management, and communication purposes; and

* conducting audience analysis and public engagement to understand what information is needed to support outside entities.
Linking Environmental Management Actions and Program Activities to Changes in Environmental Conditions and Trends:

Environmental indicator sets are developed for many purposes, including tracking progress toward environmental goals and program performance. However, organizations that develop environmental indicator sets to improve the performance of environmental management programs can encounter challenges that inhibit the use of indicator sets in this context. Specifically, organizations encounter problems accounting for (1) causal relationships between management actions and other factors beyond the agency's control that can potentially affect environmental changes and (2) the delay between management actions and achieved results. Because complex webs of variables interact to determine ecological and human health outcomes, the role of a particular program in shaping environmental or natural resource conditions cannot always be determined. Organizations sometimes rely on indicator sets as diagnostic tools to highlight problem areas requiring further study, rather than as direct measures of performance, because indicator sets generally demonstrate a correlative--rather than causal--relationship between specific policies or programs and environmental conditions. Moreover, management actions can take many years to yield environmental results. One developer expressed concern that the conditions and trends measured in its indicator set would be used to determine funding allocations without regard to the long-term nature of environmental programs. Such problems are consistent with our work on performance measurement in general.
In a June 1997 report on GPRA, we noted that the limited or indirect influence that the federal government sometimes has in determining whether a desired result is achieved complicates the effort to identify and measure the discrete contribution of a federal initiative to a specific program result.[Footnote 21] Our March 2004 review of GPRA explained that this impediment occurs primarily because many federal programs' objectives are the result of complex systems or phenomena outside the program's control. In such cases, it is particularly challenging for agencies to confidently attribute changes in outcomes to their program--the central task of program impact evaluation.[Footnote 22] Our January 2001 report on management challenges at EPA noted that environmental programs may not yield measurable results for many years into the future.[Footnote 23] However, our prior work also discussed best practices for addressing challenges to measuring the results of such programs. In particular, to address the challenge of discerning the impact of a federal program when other factors also affect results, we suggested that agencies establish a rationale for how the program delivers results. Establishing such a rationale involves three related practices: (1) taking a holistic or "systems" approach to the problem being addressed, (2) building a program logic model that describes how activities translate to outcomes, and (3) expanding program assessments and evaluations to validate the model linkages and rationale.
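A program logic model of the kind described in practice (2) can be represented as a simple chain from activities to outputs to outcomes. The sketch below is purely illustrative--the class, the example activity, and the outcome names are invented for this example and do not represent EPA's or any agency's actual model.

```python
# Illustrative program logic model: activities produce outputs, and outputs
# support outcomes, so a reported outcome can be traced back to activities.
# All names here are hypothetical assumptions, not drawn from the report.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    activity_outputs: dict = field(default_factory=dict)  # activity -> [outputs]
    output_outcomes: dict = field(default_factory=dict)   # output -> [outcomes]

    def outcomes_of(self, activity):
        """Trace an activity through its outputs to the outcomes it supports."""
        results = set()
        for output in self.activity_outputs.get(activity, []):
            results.update(self.output_outcomes.get(output, []))
        return results

model = LogicModel(
    activity_outputs={"issue discharge permits": ["permitted facilities"]},
    output_outcomes={"permitted facilities": ["reduced pollutant loads"]},
)

print(model.outcomes_of("issue discharge permits"))
```

Validating such a model--practice (3)--would then mean testing whether the asserted links between outputs and outcomes actually hold in monitoring data, which is precisely where the data gaps discussed above become limiting.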
EPA's recent attempts to develop a set of environmental indicators illustrate the difficulties in linking management actions with the environmental results of such actions.[Footnote 24] In November 2001, at the direction of its Administrator, EPA embarked on a major effort--called the Environmental Indicators Initiative--to develop an assessment of the nation's environmental conditions and trends to enhance the agency's efforts to manage for environmental results, and to identify data gaps and the research and information collection efforts needed to fill those gaps. EPA's long-term goal for the initiative was to improve the data and indicators that are being used to guide its strategic plans, priorities, performance reports, and policy and management decisions.[Footnote 25] EPA's initiative, which resulted in the publication of its Draft Report on the Environment 2003, seeks to provide a coherent picture of the nation's environment. This initiative is a major step toward developing indicators to provide a better understanding of the status and trends in human health and environmental conditions, as well as the more traditional measures of air, water, and land conditions. While EPA's two independent science advisory organizations--the Science Advisory Board and the National Advisory Council for Environmental Policy and Technology--have identified data limitations and other problems with the draft report, they commended EPA for its efforts and strongly recommended that EPA finalize the report after making needed revisions and improvements. According to EPA, work on its next Report on the Environment--scheduled for release in the summer of 2006--is currently under way. The next report will continue the efforts to develop a more comprehensive set of environmental indicators that could be used for a variety of purposes.
EPA plans to include a set of regional environmental indicators in the next report to enhance the comprehensiveness of the indicators at multiple geographic scales. EPA is also working to integrate environmental information into a variety of planning processes. For example, the Office of Environmental Information and the Office of the Chief Financial Officer are currently working to link the forthcoming Report on the Environment 2006 to the agency's strategic planning effort. EPA's recent actions represent noteworthy progress, but the agency still has considerable distance to travel and important challenges to overcome in developing a set of national environmental indicators useful for establishing priorities, allocating resources, and assessing environmental results. Since our 1998 report on EPA's management, we have stressed numerous times that EPA should place priority on developing indicators to manage for results. In this regard, the scarcity of outcome measures in EPA's collection of performance metrics largely reflects that the scientific knowledge essential to permit outcome measurement is often lacking, and that significant time lags often exist between actions taken to protect and improve the environment and demonstrable effects. In the absence of data-supported measures to detect and assess changes in the environment, how efficiently and effectively EPA is using its resources to address the nation's environmental problems becomes a matter of judgment. Even with the agency's recent progress toward developing better outcome measures, EPA continues to face substantial challenges in understanding and describing the complex relationships among its programs, specific environmental pollutants, and human health and ecological conditions. EPA plans to continue developing and refining its indicator set as it seeks to clarify more fully the linkages of environmental pollution and other factors with human health and ecological conditions.
To do so, it must continue to work to obtain credible and reliable environmental data from its own and other federal and nonfederal databases to support the indicator framework laid out in the Draft Report on the Environment. This task will involve continued collaborative effort with other federal, state, and tribal agencies. As we reported in January 2003,[Footnote 26] EPA's progress in managing for results, particularly in describing current conditions and trends and identifying and filling research and data gaps, hinges on its efforts to translate its vision into specific actions. Such actions include establishing target dates for meeting specific milestones, identifying and obtaining sufficient staff and financial resources, and developing a structured approach for establishing direction, setting priorities, and measuring performance. Identifying and implementing specific actions aimed at better managing for results by developing and using environmental measures in planning, budgeting, and evaluating results continues to be difficult for EPA. The agency's earliest attempts to do so date back to 1974, and in 1990 the agency made measuring changes in environmental conditions and trends a major policy and operational focus. These previous efforts to develop and use environmental indicators illustrate both the importance and difficulty of doing so, and the need for a focused, long-term commitment as changes occur in the agency's senior management and priorities. The previous EPA efforts have been hindered not only by technical difficulties in establishing linkages between program activities and changes in the environment, but also by changes in leadership within the agency and the lack of needed resources for monitoring environmental conditions. Monitoring activities have had trouble competing for limited resources with EPA's regulatory programs and activities.
Recently, the Administrator of EPA has endorsed the continuation of the agency's indicators initiative in principle, and EPA has included the initiative as a performance measure in its annual performance plan for data quality activities. In addition, two of EPA's external scientific advisory organizations--the Science Advisory Board and the National Advisory Council for Environmental Policy and Technology--have lauded EPA's efforts thus far. Nonetheless, the initiative--managed by EPA's Office of Environmental Information and Office of Research and Development--is not yet a key component of an agencywide comprehensive approach for identifying priorities, focusing resources on the areas of greatest concern, and managing EPA's work to achieve measurable results. For example, EPA has not established or planned an institutional framework with clear lines of responsibility and accountability among its various program offices and other organizational components for developing and using environmental indicators. Consequently, EPA has no systematic means to ensure that its efforts to identify environmental conditions and trends inform priorities, strategic plans, allocation of resources, and agency reporting systems--means needed to establish accountability for EPA's efforts and to determine whether programs and activities are having desired results or need to be modified to better address the agency's priorities.

Conclusions:

Despite decades of activity and billions of dollars of investment, the nation is not yet capable of producing a comprehensive picture of environmental or natural resource conditions or trends. Federal and nonfederal organizations are developing and using environmental indicator sets to identify data gaps and bridge the gap between needed and available information.
Despite several significant challenges, users of the indicator sets that we reviewed reported positive impacts in enhancing strategic planning efforts, communicating complex environmental issues, and tracking progress toward environmental goals. However, it is difficult to quantify the benefits of these impacts because environmental indicator sets do not themselves create change from which benefits can be measured. Rather, indicator sets may influence environmental management activities and thus yield benefits by improving the quality of decisions. Much research remains to be done on how to better gauge the return on the investment made by organizations that develop indicator sets. Meanwhile, the picture of the nation's environmental conditions and trends remains incomplete, as indicator set developers struggle to obtain sufficient data and to coordinate their efforts with those of other set developers. Federal agencies developing sets of environmental indicators face several major common challenges: selecting the most appropriate indicators and sustaining a balanced process over time; linking the environmental outcomes represented by the indicators to specific environmental programs; enhancing the compatibility and coverage of environmental data; and overcoming obstacles to coordinating and integrating indicator sets into a comprehensive picture of the state of the nation's environment and natural resources. The refinement and usefulness of future sets of environmental indicators will largely depend on the extent to which these common challenges are resolved. Yet no entity has the authority, responsibilities, and resources to bring a concerted, focused, and systematic approach to addressing these common challenges and to move toward a more systematic and integrated approach to developing federal sets of environmental indicators.
Individual federal organizations may be missing opportunities to improve the quality of their indicator sets by not integrating their work with other similar efforts. Moreover, independently developing sets of indicators increases the risk of duplicating the activities of others. Recognizing the need for a more coordinated approach to the federal investment in environmental indicator sets, CEQ's Interagency Working Group on Indicator Coordination is beginning to address the challenges of developing such sets. The Working Group is focused on developing institutional arrangements to provide the capacity and collaboration needed to produce and publish indicator information, guide the selection and development of indicators and the organization of data for effective access and use, and develop processes for coordinating and integrating ongoing federal indicator development projects. However, the Working Group does not have a stable institutional arrangement with explicit responsibility and authority to ensure the continued and full involvement, cooperation, and resources of other federal agencies. Participants in the expert meeting convened for us by the National Academy of Sciences generally believed that the specific institutional arrangements used to coordinate and integrate federal environmental indicator projects should be carefully considered to ensure the credibility of their outputs, both inside and outside the federal government. Moreover, they noted that such arrangements should address specific key functions, such as providing guidance for developing and using environmental indicators, designing an information architecture using the best available information technology, identifying the most crucial areas requiring environmental research, and assisting in environmental indicator selection, development, improvement, and evaluation.
We have long encouraged EPA to develop environmental indicators as a means to establish priorities, allocate resources, assess progress, and, in general, manage for environmental results. While we believe that EPA's Environmental Indicators Initiative and Draft Report on the Environment are a much-needed step in the right direction, this is not the first time the agency has tried to develop such environmental measures. The agency's successive efforts to develop and use environmental indicators since 1974 illustrate both the importance and difficulty of doing so and emphasize the need for dedicated, long-term commitment as changes occur in the agency's senior management and priorities. Given the complexity of the effort, a strong commitment to an institutional framework for developing and using indicators that emphasizes a systematic approach--including clear lines of responsibility and accountability among program offices and other organizational components and specific expectations, schedules, milestones, and resources--would better enable the agency's management to ensure that indicators of environmental conditions and trends are incorporated into EPA's efforts to plan strategically, allocate resources, and assess progress toward meeting environmental goals and objectives. Recommendations for Executive Action: To provide a comprehensive picture of environmental and natural resource conditions and trends to assess the nation's position and progress, we recommend that the Chairman of CEQ develop institutional arrangements needed to ensure a concerted, systematic, and stable approach to address the challenges associated with the development, coordination, and integration of environmental indicator sets. Such arrangements should be capable--either separately or jointly--of assisting in the development, selection, evaluation, and refinement of a national system of environmental indicators. 
The arrangements should provide for the coordination of federal data collection, monitoring, and statistical compilation activities, including consolidation and prioritization of data gaps, to support environmental indicators. Arrangements should also be capable of guiding and coordinating environmental indicator development and use, including creating a clearinghouse for best practices and lessons learned. The Chairman's strategy should incorporate the best available information technology to develop an information architecture for collecting, maintaining, and distributing environmental information. Moreover, the Chairman should provide for methods to identify environmental research and development focus areas. Finally, the system of arrangements should be designed to ensure the authority and credibility of its outputs. Building on EPA's initial efforts on indicators and to evaluate the purposes that indicators might serve, we recommend that the EPA Administrator establish clear lines of responsibility and accountability among EPA's various organizational components and identify specific milestones, resources, and other requirements for developing and using environmental indicators to inform the agency's strategic systems for planning, budgeting, and reporting on progress. Agency Comments and Our Evaluation: We provided a draft of this report for review and comment to CEQ, the Departments of Agriculture and the Interior, EPA, and NOAA, all of which provided comment. Each of the agencies generally agreed with the report's findings and recommendations. Additional agency comments included the following: * CEQ said that the report was a timely and comprehensive review of the many efforts underway, and that the report properly documents the many advancements and challenges recognized by experts. 
CEQ noted that the report should more clearly recognize that a comprehensive set of environmental indicators has the potential to benefit environmental management governmentwide. We agree that environmental indicators stand to enhance management activities, such as strategic planning or resource allocation, across all federal agencies. Furthermore, CEQ commented that the report should make note of the Program Assessment Rating Tool, recently developed by the Office of Management and Budget, because it can enable both the executive and legislative branches of government to better understand program performance and identify opportunities for improvement. CEQ also noted that the report should make reference to the Global Earth Observation System of Systems--the international cooperative effort to bring together existing and new hardware and software to harmonize the supply of data and information. We modified the report text as appropriate to incorporate these recent developments. * The Department of Agriculture noted that the report effectively recognizes the need for better coordination of environmental indicator development and reporting among federal and nonfederal entities. Some Agriculture reviewers believed that, while the report emphasizes EPA's efforts in this area, many other agencies have authorities and responsibilities regarding environmental indicators. Additionally, Agriculture's Economic Research Service thought the report would have benefited from additional emphasis on the importance of coordinating behavioral and environmental data. * The Department of the Interior noted that further efforts to identify institutional arrangements are essential, given the unique characteristics of and complex interrelationships among the range of agency programs noted in the report. * EPA expressed some concern that the report implied that the Draft Report on the Environment 2003 was not successful in achieving its goals.
We do not believe that the report makes such an implication, and we did not attempt to evaluate the success of the report in meeting its goals. Rather, we focused on the persistent need for the agency to provide clear lines of responsibility and accountability for meeting the goals of the Environmental Indicators Initiative--which produced the 2003 report--one of which was to improve the agency's ability to manage for results. EPA noted that it is currently working to link the planned Report on the Environment 2006 to the agency's strategic planning effort, and investigating other opportunities to link environmental information to management reporting and accountability systems. We modified the report text to better reflect these activities. * NOAA questioned the practicality of coordinating the independent efforts of the many federal agencies currently collecting environmental monitoring data on coastal conditions. However, NOAA agreed that the report correctly characterizes the importance--as well as the difficulty--of doing so. Finally, CEQ, the departments, and EPA recommended a number of technical changes to the report, which we incorporated as appropriate. We are sending copies of this report to the Administrators of EPA and NOAA, the Chairman of the Council on Environmental Quality, the Secretaries of Agriculture and the Interior, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available free of charge on the GAO Web site at http://www.gao.gov. Should you or your respective staffs have any questions about this report, please contact me at (202) 512-6225, or Ed Kratzer, Assistant Director, at (202) 512-6553. Key contributors to this report are listed in appendix XI. Signed by: John B.
Stephenson: Director, Natural Resources and Environment: [End of section] Appendixes: Appendix I: Objectives, Scope, and Methodology: Specifically, we were asked to report on the following questions: (1) How and for what purposes are federal and nonfederal organizations developing and using environmental indicator sets? and (2) What are the major challenges facing the development and use of environmental indicator sets? For the purposes of this review, we defined an "environmental indicator set" as a selected group of quantifiable variables that shows a significant condition or trend of the state of the environment and natural resources, or related human activity. Our review focused primarily on the development and use of sets of environmental indicators, rather than on any single indicator. Our review included sets organized around environmental conditions and trends, ecological health, environmental performance, sustainable development, and corporate environmental information. To meet our objectives, we performed multiple lines of work, detailed below: reviewing literature on the development and use of environmental indicator sets; interviewing key experts from the United States and abroad; developing a compendium of environmental indicator sets; surveying developers and users affiliated with 39 environmental indicator sets at the national, state, regional, and local levels; conducting in-depth case studies of 8 indicator sets at those same levels; and contracting with the National Academy of Sciences to convene a meeting of experts. In developing our findings, we compiled evidence from across these lines of work to corroborate and "triangulate" salient themes. However, we did not intend to exhaustively catalog the universe of environmental indicator sets. General references to indicator sets in this report refer to the 47 sets we reviewed in detail--the 8 case studies and the 39 sets represented in our survey results.
Moreover, we did not evaluate the quality of data used in any of the indicator sets we reviewed, and we did not rely on these data for any of our findings. A thorough review of the data systems that support the indicator sets we reviewed was outside the scope of this project. Compendium of Environmental Indicator Sets: To identify a list of environmental indicator sets for review, we solicited input from experts in the field, asking them to identify indicator sets on four geographic scales--national, regional, state, and local. Forty-eight experts were selected from extensive Web and literature searches, background interviews, and contacts from professional conferences spanning our geographic scales. We distributed an electronic data collection instrument to each of the experts asking for information on environmental indicator sets with which they were familiar that either were being developed or had been developed in the past 10 years; states that have been or are currently leading the effort in developing and using environmental indicator sets; and a project contact person for each set. Twenty-three experts responded. We combined duplicate responses and eliminated responses that (1) did not meet our definition of an indicator set or (2) could not be substantiated with enough information to locate a specific initiative. A pool of 87 environmental indicator sets was identified for detailed review (see app. II). Survey of Practitioners: To develop a list of environmental indicator set developers and users--which we called practitioners--to survey, we contacted the points of contact for the 87 indicator sets identified by the experts and asked each to provide us with a developer and a user to receive our survey. This process yielded 87 practitioners to be surveyed, representing 52 of the indicator sets. Forty-nine of the 87 practitioners responded to our survey, for a 56 percent response rate. Table 5 provides summary information.
The survey results are not necessarily representative of the entire population of environmental indicator set practitioners.

Table 5: Summary of Survey Participants:

                                       National  Regional  State  Local  Total
Indicator sets:
  Identified for survey                   17         8       14     13     52
  Represented by a completed survey       15         4        7     13     39
Practitioners:
  Identified for survey                   28        14       23     22     87
  Responded to the survey                 20         6        9     14     49
    As a developer                         8         3        3      5     19
    As a user                              2         1        3      1      7
    As both                               10         2        3      8     23

Source: GAO.

[End of table]

We identified the areas to cover in the survey based on the assignment request, our Internet and literature searches, background interviews, and the professional conferences we had attended. The survey questions focused on the characteristics of each indicator set, how it was developed, and how it is being used. We pretested the survey with two developers and two users, evaluating the appropriateness and quality of the survey questions and responses and testing the usability of the Internet-based survey. Based on the pretest results, we made the necessary changes to the survey prior to its implementation. We administered the survey through the Internet. While verifying that we had accurate information on the survey population, we obtained the practitioners' e-mail addresses. We used e-mail to inform the practitioners of the survey administration and to provide them with the Web link for the survey and their log-in name and password. To maximize the response rate, we sent an e-mail reminder and followed up by telephone to encourage survey participation.
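The response rate reported above follows directly from the practitioner counts in Table 5. As an illustrative check (the per-scale counts are assumed from the table), the arithmetic can be reproduced in a few lines of Python:

```python
# Illustrative check of the survey statistics reported above.
# Per-scale counts are assumed from Table 5 of this report.
practitioners_surveyed = {"National": 28, "Regional": 14, "State": 23, "Local": 22}
practitioners_responded = {"National": 20, "Regional": 6, "State": 9, "Local": 14}

total_surveyed = sum(practitioners_surveyed.values())    # 87 practitioners contacted
total_responded = sum(practitioners_responded.values())  # 49 responses received

# Round to the whole percentage the report cites (56 percent).
response_rate = round(100 * total_responded / total_surveyed)

print(f"{total_responded} of {total_surveyed} practitioners responded "
      f"({response_rate} percent)")
```

Running the sketch confirms the figures in the text: 49 of 87 practitioners, a 56 percent response rate.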
The survey was structured in two sections: one for developers to complete and the other for users. At least one developer or user from 39 of the 52 indicator sets completed our survey. However, some respondents answered the survey in a capacity other than the one in which we originally classified them, and the survey results for some indicator sets reflect answers from two individuals. Given that the purpose of the survey was to gather general descriptive information on indicator sets and how they are developed and used, we do not believe that the multiple responses for some indicator sets greatly influence the survey results. Our survey of developers and users of environmental indicator sets and a more complete tabulation of the survey results (GAO-05-56SP) will also be available on the GAO Web site at www.gao.gov/cgi-bin/getrpt?GAO-05-56SP. Case Studies: To deepen our understanding of the development and use of environmental indicator sets, we reviewed 8 environmental indicator sets in depth through case studies. We selected two indicator sets for case study review at each of four geographic scales--national, regional, state, and local--from the pool of 87 indicator sets identified by experts. The selection was based on the maturity of the indicator set (current and active) and the availability and accessibility of individuals involved in its development and use. Table 6 lists the environmental indicator sets selected and the geographic scale each set represents.

Table 6: Environmental Indicator Sets Selected for Case Study Review:

Case study name                                                             Geographic scale
The Heinz Center's State of the Nation's Ecosystems                         National
EPA's National Coastal Assessment                                           National
Chesapeake Bay Program                                                      Regional
Great Lakes State of the Lakes Ecosystem Conference                         Regional
Minnesota's Department of Natural Resources Strategic Conservation Agenda   State
Environmental Protection Indicators for California                          State
Quality of Life Indicator Set, Jacksonville, Florida                        Local
Environmental Indicators Project, West Oakland, California                  Local

Source: GAO.

[End of table]

We conducted semistructured interviews with at least three individuals involved in the development, use, and data gathering activities of each environmental indicator set; an additional indicator set was selected to test our interview questions. Semistructured interviews allowed interviewees the opportunity to openly and candidly supply information on a wide range of issues relating to their involvement with the development and use of the indicator set. We also reviewed relevant documents pertaining to the development and use of each of the environmental indicator sets. In addition to providing evidence for this report, the case study information was used to construct case study profiles, which were provided to the appropriate program managers for review. The profiles appear in appendix III. Meeting of Experts Convened by the National Academy of Sciences: To assess the current state of environmental indicator set development and use, we contracted with the National Academy of Sciences (NAS) to host a 2-day meeting of experts. The selection of experts to participate in the meeting was a two-step process. First, we worked with NAS staff to identify individuals with expertise in environmental indicator sets.
After reviewing the background of each expert, we selected participants using the following criteria:

* balance of expertise (e.g., managers, data gatherers, developers, users, scientists, researchers, and policymakers);

* balance of knowledge across various disciplines (e.g., natural resources, ecology, and agriculture); and

* balance in representation (e.g., federal agencies, state agencies, academia, and nonprofit and private organizations).

Based on the availability of the selected participants, we invited 26 experts--representing these geographic levels and sectors--to participate in the meeting, held March 9-10, 2004, in Washington, D.C.; all of them attended. Prior to the meeting, we provided the selected experts with background materials that highlighted past reports by GAO, the National Research Council, and other organizations addressing environmental indicator set issues. The following 26 experts participated in the meeting:

Albert Abee, Sustainable Development Coordinator, U.S. Forest Service.

James R. Bernard, Environmental Management Consulting.

David Berry, Sustainable Water Resources Roundtable.

Zach Church, Pennsylvania Department of Environmental Protection, Policy Office.

J. Clarence Davies, Ph.D., Senior Fellow, Resources for the Future.

Dennis Fenn, Ph.D., Center Director, U.S. Geological Survey, Southwest Biological Science Center.

Keith G. Harrison, M.A., R.S., Certified Ecologist, Executive Director, Michigan Environmental Science Board, and Special Projects Coordinator, Michigan Department of Environmental Quality.

R. Lee Hatcher, Managing Director, AtKisson Inc.

Theodore Heintz, Indicator Coordinator, White House Council on Environmental Quality.

Rainer Hoenicke, Ph.D., Environmental Scientist, San Francisco Estuary Institute.

Robert J. Huggett, Ph.D., Professor of Zoology and Vice President for Research and Graduate Studies, Michigan State University.

Suellen Terrill Keiner, J.D., Academy General Counsel and Vice President for Academy Programs, The National Academy of Public Administration.

Daniel Markowitz, Ph.D., Associate, Malcolm Pirnie Inc.

Gary Matlock, Ph.D., Director, National Centers for Coastal Ocean Science, National Oceanic and Atmospheric Administration.

Shelley Metzenbaum, Ph.D., Executive Director, Environmental Compliance Consortium, and Visiting Professor, University of Maryland School of Public Affairs.

Patrick O'Brien, Ph.D., Consulting Environmental Scientist, ChevronTexaco Energy Technology Company.

Robin O'Malley, Senior Fellow, The H. John Heinz III Center for Science, Economics and the Environment.

Gordon Orians, Ph.D., Professor Emeritus, Department of Biology, University of Washington.

Duncan Patten, Ph.D., Research Professor, Big Sky Institute, Montana State University.

Marcus Peacock, Associate Director, Natural Resources, Energy and Science, Office of Management and Budget.

Dee Peace Ragsdale, Performance and Recognition Manager, Washington Department of Ecology.

Mark Schaefer, Ph.D., President and Chief Executive Officer, NatureServe.

Michael Slimak, Ph.D., Associate Director for Environmental Ecology, National Center for Environmental Assessment, U.S. Environmental Protection Agency.

Greg Wandrey, Ph.D., Director of Product Stewardship, Pioneer Hi-Bred Inc.

John R. Wells, Sustainable Development Director, Minnesota Environmental Quality Board.

Robin P. White, Ph.D., Senior Associate, World Resources Institute.

During the meeting, experts participated in roundtable sessions and breakout groups to discuss the following:

* Why are organizations developing and using environmental indicator sets and what impacts are these sets having in the United States?
* What significant scientific, environmental data, communication, and institutional challenges hinder the development and use of environmental indicator sets?

* What actions could be taken to overcome the significant challenges to the development and use of environmental indicator sets?

The meeting was audio-recorded to facilitate transcription. We reviewed the written transcript of the proceedings, the documents produced by experts, and other notes from the 2-day meeting to produce a summary document, which was provided to the experts for review. Their comments were incorporated into the summary, where appropriate. We used the summary document in preparing this report.

[End of section]

Appendix II: Key Environmental Indicator Initiatives Identified by Experts:

Indicator set initiative: Sustainable Development in the United States; Web site: http://clinton1.nara.gov/White_House/EOP/pcsd/; Scale: National.

Indicator set initiative: EPA--Draft Report on the Environment; Web site: http://www.epa.gov/indicators/; Scale: National.

Indicator set initiative: Sustainable Minerals Roundtable; Web site: http://www.unr.edu/mines/smr/; Scale: National.

Indicator set initiative: Sustainable Water Resources Roundtable; Web site: http://water.usgs.gov/wicp/acwi/swrr/; Scale: National.

Indicator set initiative: Roundtable on Sustainable Forests; Web site: http://www.sustainableforests.net/info.php; Scale: National.

Indicator set initiative: Sustainable Rangelands Roundtable; Web site: http://sustainablerangelands.cnr.colostate.edu/; Scale: National.

Indicator set initiative: State of the Nation's Ecosystems; Web site: http://www.heinzctr.org/ecosystems/; Scale: National.

Indicator set initiative: Ecological Monitoring and Assessment Program; Web site: http://www.epa.gov/emap/; Scale: National.

Indicator set initiative: Ecological Indicators for the Nation; Web site: http://books.nap.edu/catalog/9720.html; Scale: National.
Indicator set initiative: Index of Watershed Indicators; Web site: http://www.epa.gov/iwi/; Scale: National.

Indicator set initiative: Chemical and Pesticide Results Measures; Web site: http://www.pepps.fsu.edu/CAPRM/; Scale: National.

Indicator set initiative: Waste Indicator System for the Environment; Web site: http://www.pepps.fsu.edu/WISE/; Scale: National.

Indicator set initiative: America's Children and the Environment; Web site: http://www.epa.gov/envirohealth/children/; Scale: National.

Indicator set initiative: National Report on Human Exposure to Environmental Chemicals; Web site: http://www.cdc.gov/exposurereport/; Scale: National.

Indicator set initiative: Index of Leading Environmental Indicators; Web site: http://www.aei.org/publications/bookID.407/book_detail.asp; Scale: National.

Indicator set initiative: Agricultural Resource and Environmental Indicators; Web site: http://www.ers.usda.gov/publications/arei/; Scale: National.

Indicator set initiative: Environmental Public Health Indicators; Web site: http://www.cdc.gov/nceh/indicators/default.htm; Scale: National.

Indicator set initiative: Risk-Screening Environmental Indicators; Web site: http://www.epa.gov/opptintr/rsei/; Scale: National.

Indicator set initiative: The Status and Trends of Our Nation's Biological Resources; Web site: http://biology.usgs.gov/s+t/SNT/index.htm; Scale: National.

Indicator set initiative: National Coastal Condition Report; Web site: http://www.epa.gov/owow/oceans/nccr/; Scale: National.

Indicator set initiative: The Status of Biodiversity in the United States; Web site: http://www.natureserve.org; Scale: National.

Indicator set initiative: National Estuarine Reserves System Wide Monitoring Program; Web site: http://nerrs.noaa.gov/; Scale: National.

Indicator set initiative: National Coastal Management Performance Measurement System; Web site: http://www.ocrm.nos.noaa.gov/; Scale: National.
Indicator set initiative: National Park Service--Vital Signs Program; Web site: http://science.nature.nps.gov/im/monitor/index.htm; Scale: National.

Indicator set initiative: Relative Sea Level Trends; Web site: http://pubs.usgs.gov/of/2002/of02-233/ppvariables.htm; Scale: National.

Indicator set initiative: U.S. Land Cover Trends; Web site: http://gam.usgs.gov/LandUseDynamics/ludatacollection.shtml; Scale: National.

Indicator set initiative: Forest Health Monitoring Vegetation Indicator Pilot Program; Web site: http://www.fs.fed.us/na/briefs/fhm99/fhm99.htm; Scale: National.

Indicator set initiative: Chesapeake Bay Program; Web site: http://www.chesapeakebay.net; Scale: Regional.

Indicator set initiative: State of the Great Lakes Ecosystem Conference; Web site: http://www.epa.gov/glnpo/solec/; Scale: Regional.

Indicator set initiative: Environmental Indicators in the Estuarine Environment; Web site: http://www.aceinc.org/; Scale: Regional.

Indicator set initiative: Environmental Health Indicators for the U.S.-Mexico Border; Web site: http://www.fep.paho.org/english/env/Indicadores/IndSA.htm; Scale: Regional.

Indicator set initiative: New England Environmental Goals and Indicators Project; Web site: http://www.gmied.org; Scale: Regional.

Indicator set initiative: Western Regional Climate Center; Web site: http://www.wrcc.dri.edu/; Scale: Regional.

Indicator set initiative: Puget Sound/Georgia Basin Ecosystem Indicators; Web site: http://www.ecy.wa.gov/biblio/0201002.html; Scale: Regional.

Indicator set initiative: Southeastern Louisiana Top 10 by 2010 Indicators Report; Web site: http://www.top10by2010.org/; Scale: Regional.

Indicator set initiative: North State (California) Vital Signs; Web site: http://www.mcconnellfoundation.org/; Scale: Regional.

Indicator set initiative: Mid-Atlantic Integrated Assessment; Web site: http://www.epa.gov/emap/maia/; Scale: Regional.
Indicator set initiative: South Florida/Everglades Comprehensive Ecosystem Restoration Plan; Web site: http://www.evergladesplan.org/; Scale: Regional.

Indicator set initiative: Tennessee Valley Authority Vital Signs Program; Web site: http://www.tva.gov/environment/reports/envreports/index.htm; Scale: Regional.

Indicator set initiative: Pacific Northwest Salmon Habitat Indicators; Web site: http://www.ecy.wa.gov/biblio/99301.html; Scale: Regional.

Indicator set initiative: Aquatic Habitat Indicators for the Pacific Northwest; Web site: http://yosemite.epa.gov/R10/ecocomm.nsf/0/74476bae1ae7e9fb88256b5f00598b43?OpenDocument; Scale: Regional.

Indicator set initiative: Tampa Bay Estuary Program Baywide Environmental Monitoring Report; Web site: http://www.tbep.org/baystate/bemr.html; Scale: Regional.

Indicator set initiative: Ecosystem Indicators for the Lake Champlain Basin Program; Web site: http://www.uvm.edu/envnr/indicators/; Scale: Regional.

Indicator set initiative: Environmental Protection Indicators for California; Web site: http://www.oehha.ca.gov/multimedia/epic/; Scale: State.

Indicator set initiative: Minnesota Environmental Indicators; Web site: http://www.dnr.state.mn.us/eii/index.html; Scale: State.

Indicator set initiative: Minnesota Strategic Conservation Agenda; Web site: http://www.dnr.state.mn.us/conservationagenda/index.html; Scale: State.

Indicator set initiative: Central Texas Sustainability Indicators Initiative; Web site: http://www.centex-indicators.org/; Scale: State.

Indicator set initiative: Pennsylvania Environmental Futures Planning; Web site: http://www.dep.state.pa.us/hosting/efp2/PDF_ICF_EFP2X/priorities.htm; Scale: State.

Indicator set initiative: State of the Texas Environment Report; Web site: http://www.tnrcc.state.tx.us/; Scale: State.

Indicator set initiative: Texas Index of Leading Environmental Indicators 2000; Web site: http://www.texaspolicy.com/research_reports.php?report_id=143&loc_id=1; Scale: State.
Indicator set initiative: Texas Environmental Almanac; Web site: http://www.texascenter.org/almanac/; Scale: State.
Indicator set initiative: Water for Texas; Web site: http://www.twdb.state.tx.us/publications/reports/State_Water_Plan/2002/FinalWaterPlan2002.htm; Scale: State.
Indicator set initiative: Utah Air Monitoring--Mobile Sources; Web site: http://www.airmonitoring.utah.gov/amc.htm; Scale: State.
Indicator set initiative: Ambient Air Monitoring Program; Web site: http://www.epa.gov/air/oaqps/qa/monprog.html; Scale: State.
Indicator set initiative: Minnesota Milestones; Web site: http://www.mnplan.state.mn.us/mm/; Scale: State.
Indicator set initiative: Oregon Shines; Web site: http://egov.oregon.gov/DAS/OPB/os.shtml; Scale: State.
Indicator set initiative: Florida Assessment of Coastal Trends; Web site: http://www.pepps.fsu.edu/FACT/; Scale: State.
Indicator set initiative: Washington Department of Ecology; Web site: http://www.ecy.wa.gov/; Scale: State.
Indicator set initiative: Oregon State of the Environment Report; Web site: http://egov.oregon.gov/DAS/OPB/soer2000index.shtml; Scale: State.
Indicator set initiative: State of Kentucky's Environment; Web site: http://www.eqc.ky.gov/pubs/soke/; Scale: State.
Indicator set initiative: Illinois Department of Environmental Quality indicators; Web site: http://www.dnr.state.il.us/orep/NRRC/balancedgrowth/indicators.htm; Scale: State.
Indicator set initiative: Environmental Indicators for Delaware Estuary; Web site: http://www.epa.gov/owow/estuaries/coastlines/jan02/envindicator.html; Scale: State.
Indicator set initiative: Indicators of Livable Communities; Web site: http://www.mdf.org/megc/pubs/livable_communities.htm; Scale: State.
Indicator set initiative: Oregon's First Approximation Report; Web site: http://www.oregonforestry.org/sustainability/first_approximation_report.htm; Scale: State.
Indicator set initiative: Sustainable Development Indicators for Pennsylvania; Web site: http://www.paconsortium.state.pa.us/pointing_pa_sustainable_future.htm; Scale: State.
Indicator set initiative: New Jersey Hudson Bay Environmental Indicators Initiatives; Web site: http://www.harborestuary.org/reports/harborh.htm; Scale: State.
Indicator set initiative: Everglades Comprehensive Annual Report; Web site: http://www.sfwmd.gov/org/ema/everglades/; Scale: Local.
Indicator set initiative: The State of the Bay--a Characterization of the Galveston Bay Ecosystem; Web site: http://www.tnrcc.state.tx.us/admin/topdoc/pd/020/02-04/galvestonbay.html; Scale: Local.
Indicator set initiative: Index of Silicon Valley; Web site: http://www.jointventure.org/resources/2002Index/; Scale: Local.
Indicator set initiative: Santa Monica Sustainable City Plan; Web site: http://santa-monica.org/epd/scp/; Scale: Local.
Indicator set initiative: Current Status and Historical Trends of Selected Estuarine and Coastal Habitats in Corpus Christi Bay National Estuary Program Study Area; Web site: http://www.sci.tamucc.edu/ccs/; Scale: Local.
Indicator set initiative: Bay Area Alliance for Sustainable Communities; Web site: http://www.bayareaalliance.org/; Scale: Local.
Indicator set initiative: Bay Institute; Web site: http://www.bay.org/main.htm; Scale: Local.
Indicator set initiative: Bay Area EcoAtlas and Pulse of the Bay report; Web site: http://www.sfei.org/; Scale: Local.
Indicator set initiative: Mecklenburg County State of the Environment Report; Web site: http://www.charmeck.org/Departments/LUESA/Water+and+Land+Resources/State+of+the+Environment+Report.htm; Scale: Local.
Indicator set initiative: Sustainable Seattle--Indicators of Sustainable Community; Web site: http://www.sustainableseattle.org/Publications/40indicators.shtml; Scale: Local.
Indicator set initiative: Legacy 2002--Greater Orlando Indicators Report; Web site: http://www.hcbs.org/moreInfo.php/source/62/sby/Author/doc/251/Legacy_2002_-_Greater_Orlando_Indicator's_Report_-; Scale: Local.
Indicator set initiative: Sierra Nevada Wealth Index; Web site: http://www.sbcouncil.org/wealth.htm; Scale: Local.
Indicator set initiative: Sustainable Nantucket--a Compass for The Future; Web site: http://indicators.sustainablenantucket.org/intro.cfm; Scale: Local.
Indicator set initiative: Community-based Environmental Health Assessment Program; Web site: http://www.naccho.org/general955.cfm; Scale: Local.
Indicator set initiative: Multnomah County--Benchmarks; Web site: http://www.portlandonline.com/auditor/index.cfm?&a=39665&c=27347; Scale: Local.
Indicator set initiative: Valley Vision (California); Web site: http://www.calregions.org/civic/partners/mid-vvr.html; Scale: Local.
Indicator set initiative: King County Benchmarks; Web site: http://www.metrokc.gov/budget/benchmrk/bench03/; Scale: Local.
Indicator set initiative: State of Boston Harbor; Web site: http://www.mwra.state.ma.us/harbor/html/2002-09.htm; Scale: Local.
Indicator set initiative: West Oakland--Environmental Indicators; Web site: http://www.neip.org/; Scale: Local.
Indicator set initiative: Jacksonville Community Council Inc. Quality of Life Indicators; Web site: http://www.jcci.org; Scale: Local.
Source: GAO.
Note: Web addresses are current as of August 10, 2004.
[End of table]
[End of section]
Appendix III: Environmental Indicator Set Case Study Profiles:
We conducted eight in-depth case studies of environmental indicator sets over the course of the review. We reviewed two environmental indicator sets at each of the following geographic scales: national, regional, state, and local. The indicator sets profiled are: 1. The Heinz Center's State of the Nation's Ecosystems; 2. EPA's National Coastal Assessment; 3. Chesapeake Bay Program; 4.
Great Lakes State of the Lakes Ecosystem Conference; 5. Minnesota's Department of Natural Resources Strategic Conservation Agenda; 6. Environmental Protection Indicators for California; 7. Quality of Life Indicator Set, Jacksonville, Florida; and 8. Environmental Indicators Project, West Oakland, California. Each profile contains a brief overview of the program, the process of development, the use and impact of the indicator set, and the next steps planned for the indicator set.
The Heinz Center's State of the Nation's Ecosystems:
Overview:
In early 1997, as a follow-up to a major review of federal environmental monitoring efforts, the White House Office of Science and Technology Policy (OSTP) requested that the H. John Heinz III Center for Science, Economics, and the Environment (Heinz Center)--a nonprofit institution--develop a nonpartisan, science-based report on the state of the nation's environment. The Heinz Center lists 103 indicators in the set, with approximately 15 indicators for each of 6 major ecosystem types (Coasts and Oceans, Farmlands, Forests, Fresh Waters, Grasslands and Shrublands, and Urban and Suburban Areas) and 10 additional core national indicators that provide a broad yet succinct view of national ecosystem condition and use. The indicator set is national in scope with limited breakout by region. The indicators focus on ecosystem conditions in order to support policy debate and decision making at the national scale. The environmental indicator set information was disseminated through a report in 2002 (see fig. 5) that was issued simultaneously in print and on the Web.
Figure 5: The State of the Nation's Ecosystems Report:
[See PDF for image]
[End of figure]
Development:
The Heinz Center assembled a small in-house staff and a large team of part-time contributors drawn from government, the private sector, environmental organizations, and academia.
A design committee oversaw the entire project, while technical work groups, which provided expertise in particular ecosystems, identified the indicators and selected and assessed the data sources. Overall, nearly 150 individuals participated in the project as committee and group members, with many more participating as contributors, reviewers, and advisers. The committee selected indicators that could provide a broad, balanced description of each ecosystem type based on 10 characteristics covering the physical dimensions of the systems, their chemical and physical conditions, the status of their biological components, and the amounts of goods and services people receive from them. Once the committee chose an indicator and identified relevant sources of data, it reviewed the data against the following three criteria: (1) data had to be of sufficient quality to provide a scientifically credible description of actual ecosystem conditions; (2) data had to have adequate geographic coverage to represent the state of the nation's ecosystems; and (3) data had to be collected through an established monitoring program that offered a reasonable likelihood of future data availability.
Use:
The indicator set highlights the need for a comprehensive view of ecosystem condition and change and the need for additional information to fill the gaps in data available to describe key aspects of the nation's ecosystems. The major use to date has been by managers of major monitoring systems, who are drawing on the set in designing their collection and reporting systems.
Next steps:
The 2002 report was the first in what is intended to be a regular series of reports on the state of the nation's ecosystems, issued every 5 years. The next edition in the series is planned for issuance in 2007. Between major editions, substantial revisions--such as the incorporation of new data sets--will be issued in periodic updates on the Web.
Before the next version is published, Heinz Center staff will fill data gaps and improve the consistency of both data and indicators; consult with key scientific communities to refine and clarify certain indicators; work with public and private agencies to regularly provide data in the form needed for national reporting; and strengthen the linkages between the Heinz Center project and other efforts related to ecosystem reporting.
EPA's National Coastal Assessment:
Overview:
In 1988, the Environmental Protection Agency's (EPA) Science Advisory Board charged the Office of Research and Development (ORD) with developing a nationally consistent way to report on the condition of coasts for the purpose of Clean Water Act Section 305(b) reporting. ORD's Environmental Monitoring and Assessment Program (EMAP), which involved the efforts of several other federal agencies, developed the National Coastal Assessment (NCA) indicator set and monitoring program. The program was implemented in 2000 as a 5-year effort to evaluate the assessment methods and environmental indicators that ORD had developed to advance the science of monitoring and evaluating ecosystem condition. The program created an integrated, comprehensive coastal monitoring program and environmental indicator set among the coastal states to assess the condition of the nation's estuaries and offshore waters. Through strategic partnerships, each of the 24 coastal states involved in the NCA program has used a compatible, probabilistic design and a common set of survey indicators to conduct the survey and assess the condition of its coastal resources. These assessments in turn can be aggregated to assess conditions at the EPA regional, biogeographical, and national levels. The NCA includes five aggregate indicators--water quality, sediment quality, coastal habitat, benthic community structure, and fish tissue contaminants--based on 200 to 250 separate measurements.
The indicators cover a range of geographic scales--state, regional, biogeographical, and national. The indicators focus on showing the condition of estuaries and the association between condition and stressors. As such, the indicators are based on science rather than on administrative policy performance. The states report the indicators through state Section 305(b) reports to EPA, which submits them to Congress. The indicators are also aggregated with other data collection efforts and reported through the National Coastal Condition Report (see fig. 6).
Figure 6: Draft National Coastal Condition Report II:
[See PDF for image]
[End of figure]
Development:
A number of pilot projects conducted over a 10-year period in different geographic areas helped identify and develop the indicators. The indicators were developed based on 15 guidelines organized around four evaluation phases: conceptual relevance, feasibility of implementation, response variability, and interpretation and utility.
Use:
The NCA indicator set and monitoring program are used by 24 marine coastal states and Puerto Rico to provide an assessment of estuary conditions for the purposes of Clean Water Act Section 305(b) reporting. Before the development of NCA, states and territories had little or no coastal monitoring in place and no mechanism to evaluate the condition of the resource. The NCA provided states with a small set of indicators that are adaptable to the specific needs of the state using them. Three coastal states have fully implemented the NCA monitoring and indicator approach to fulfill Section 305(b) reporting requirements; the other 21 states either are just beginning to implement the approach or have used it to assess a part of their estuaries.
Next steps:
The 5-year NCA program is set to expire in the summer of 2004, after which the EPA Office of Water may take over the program.
At the end of the period, ORD officials will evaluate the effectiveness of the program and provide assistance to the Office of Water as needed. ORD is currently structuring monitoring programs and indicator development to provide states with tools to monitor and evaluate the condition of their waters not only for reporting purposes (Section 305(b)) but also for other provisions of the Clean Water Act, such as nonpoint source control (Section 319), Total Maximum Daily Load allocations (Section 303(d)), and the National Pollutant Discharge Elimination System permitting program (Section 402).
Chesapeake Bay Program:
Overview:
In 1991, the Chesapeake Bay Program Office, headed by EPA, began developing environmental indicators to support goal setting, to define targets and end points for restoration of the bay, and to make the program more accountable to the public by defining and communicating the bottom-line environmental results achieved by the restoration program. EPA coordinates the development, revision, and updating of the environmental indicators with more than 50 federal, state, and local government agencies and nongovernmental organizations that participate as bay program partners. The bay program carries out its work through a series of committees, advisory committees, and subcommittees. A basic tenet of the bay program's environmental indicators effort is that environmental indicators (outcome measures) need to be clearly associated with strategic goals for the program. As such, the bay program has developed a framework for linking environmental outcome measures to strategic program goals. The Chesapeake Bay Program currently uses nearly 90 environmental indicators to gauge the Chesapeake Bay's environmental condition and the progress made in restoration.
The Chesapeake Bay Program organizes the indicators into six levels that range from indicators that measure management actions--such as implementing advanced treatment of wastewater to reduce nutrient discharges--to those that are direct or indirect measures of ecological or human health. The indicators are further categorized as a performance measure; context indicator; emerging science indicator; or pressure, state, or response indicator. (These latter indicators are based on a concept of causality, in which human activities place pressures on the environment that cause a change in the state of the environment; these changes alert society, which then implements a response to reduce the pressures or to change the affected environment.) Environmental indicator set information is reported through a variety of mechanisms, such as briefing packages, presentations, fact sheets, and a triennial State of the Chesapeake Bay report (see fig. 7).
Figure 7: The State of the Chesapeake Bay Report:
[See PDF for image]
[End of figure]
Development:
The process of developing and subsequently adding, deleting, or modifying indicators is collaborative and includes hundreds of individuals working through bay program committees, subcommittees, and work groups. The criteria for indicator selection are: (1) data availability; (2) measurement of environmental results; (3) management needs; and (4) public requests. Indicators are developed to measure performance against restoration goals, which have been established primarily through three overriding Chesapeake Bay agreements. The most recent of the agreements--the Chesapeake 2000 agreement--establishes many goals to be achieved by 2010.
Use:
Goal setting through Chesapeake Bay agreements has given the Chesapeake Bay Program an important tool to develop and use indicators that improve its ability to garner and target resources and to evaluate the bay program's management strategies.
The indicator set also presents information to the public on the condition of the Chesapeake Bay through various reporting mechanisms. The environmental indicator set has supported goal setting for the bay program both in longer-term strategic implementation plans and in annual planning and budgeting.
Next steps:
The Chesapeake Bay Program office plans to develop more river-specific or subwatershed indicators in addition to baywide average indicators. The office also plans to modify, replace, or develop new indicators as necessary to measure goals in the Chesapeake 2000 agreement; fill key gaps in the indicator hierarchy and continuum to complete the "cause and effect picture" for the watershed; and initiate the development of sustainable development indicators that reflect stewardship and land use.
Great Lakes State of the Lakes Ecosystem Conference:
Overview:
The Great Lakes Water Quality Agreement of 1978, as amended, calls for the development of a set of comprehensive ecosystem health indicators for the Great Lakes. Accordingly, the indicator set is meant to be used to inform the public and report progress in achieving the objectives of the agreement. The indicators are reviewed and discussed every 2 years at the State of the Lakes Ecosystem Conference (SOLEC), hosted by EPA and Environment Canada in response to a reporting requirement of the agreement. The two governments established SOLEC in 1992 to report on the state of the Great Lakes ecosystem and the major factors affecting it, and to provide a forum for exchanging this information among Great Lakes decision makers. In the year following each conference, the governments prepare a report on the state of the lakes based in large part upon the conference process and the environmental indicators discussed there. The first conference was held in 1994, and the first comprehensive basinwide set of indicators was developed after the 1996 conference.
The 1998 SOLEC conference was the first to use a comprehensive set of indicators. Approximately 80 indicators address specific geographic zones of the entire Great Lakes Basin ecosystem, such as offshore, nearshore, coastal wetlands, and shoreline, and address issues such as human health, land use, and societal concerns. The indicators are based on a pressure-state-response (PSR) model--a causality framework in which human activities place pressures on the environment that cause a change in the state of the environment; these changes alert society, which then implements a response to reduce the pressures or to change the affected environment. The indicators are reported primarily through biennial State of the Great Lakes reports (see fig. 8).
Figure 8: State of the Great Lakes Report:
[See PDF for image]
[End of figure]
Development:
Over 130 experts participated in the development and selection of indicators. The experts were divided into seven core groups, which selected and developed indicators or reviewed draft products throughout the process; in all, the groups identified more than 850 candidate indicators. Expert panels initially screened the indicators according to three criteria--necessary, sufficient, and feasible--and then analyzed them for validity, understandability, interpretability, information richness, data availability, timeliness, and cost considerations. This vetting process reduced the number of indicators to 80. The Great Lakes indicator set draws upon and complements indicators used for more specific purposes, such as management plans created for individual lakes.
Use:
The indicator development and revision process has in itself proved beneficial by providing scientists, resource managers, and the public a forum in which to discuss and better understand the conditions of the Great Lakes and the impacts affecting their quality.
The SOLEC indicator set has also identified key data gaps and has spurred collaborative monitoring efforts between the United States and Canada.
Next steps:
To establish a consistent, easily understood indicator set, EPA and Environment Canada will continue to review and refine the indicators. Indicators are currently being grouped into bundles to reduce and organize essential information into a few understandable topics. EPA and Environment Canada also plan to build appropriate monitoring and reporting activities into existing Great Lakes programs at the federal, provincial, state, tribal, and industry levels to fully report on all of the approximately 80 indicators.
Minnesota Department of Natural Resources' Strategic Conservation Agenda:
Overview:
In 2003, the Minnesota Department of Natural Resources (DNR) began developing a Strategic Conservation Agenda (SCA) indicator set in response to a directive from the DNR Commissioner's office to strengthen accountability and public confidence by better communicating progress toward conservation results. The objective of the SCA was to provide internal management direction for defining agency-level performance goals, demonstrating accountability to citizens, and fulfilling the governor's expectations for agency accountability for results. The SCA is one piece in a larger policy hierarchy, fitting within the DNR mission statement and strategic plan and the department's budgeting process. The SCA indicator set includes about 75 indicators that target natural resource conditions, DNR management activities, and results toward which DNR will strive through management efforts. The indicator set does not represent all of the natural resources in Minnesota but rather the areas in which DNR will commit resources to achieve specific results. The SCA indicators measure natural resource trends or resource work performed.
The SCA indicator set is defined by six key performance areas at DNR: Natural Lands, Fisheries and Wildlife, Healthy Waters and Watersheds, Forests, Outdoor Recreation, and Natural Resources Stewardship Education. Targets are assigned to each indicator to define expected results and serve as specific milestones that help DNR gauge progress toward long-term goals. Environmental indicator set information was presented in the first SCA report (see fig. 9), which was issued to the public through the DNR Web site in March 2004.
Figure 9: The Strategic Conservation Agenda Report:
[See PDF for image]
[End of figure]
Development:
DNR developed the indicator set through a multistep, agencywide process under the direction of the DNR Commissioner's Office. The Science Policy Unit, housed in DNR's Office of Management and Budget Services, worked with DNR operations managers representing all DNR divisions and regions to develop the indicators. The model DNR used for selecting indicators was based on prior work through the Minnesota Environmental Indicators Initiative, which existed from 1995 through 2000; DNR relied on that past work to select indicators for its focused use. Indicators were selected within goal areas established in DNR's strategic planning process, called Directions. Different DNR divisions provided a menu of existing and new indicators along with initial targets. The targets state strategic goals in specific and measurable terms, while the indicators track progress and document results. Senior management at DNR then reviewed, modified as needed, and approved a final set of indicators that were designed to be measurable, accurate, meaningful, and compelling.
Use:
DNR uses the indicator set to assist in management decision making, to communicate how DNR programs are achieving results, and to provide accountability to citizens.
For example, the indicator "number of cords of wood offered for sale on DNR lands" allows DNR to set targets to ensure a predictable, sustainable supply of quality wood. The indicator is reported on and tracked by DNR as well as the public, allowing both to evaluate management practices and hold the department accountable for sustaining timber supplies. DNR staff's involvement in the development process has provided them an opportunity to think about natural resource management along the dimensions of performance measurement.
Next steps:
DNR will update the indicators periodically. Existing indicators will be tracked over time to chart and report progress toward conservation targets. New indicators will be added to fill information gaps. DNR will work with the public to adjust targets as conditions change and to develop new targets as opportunities arise to better conserve natural resources.
Environmental Protection Indicators for California:
Overview:
The California Environmental Protection Agency (Cal/EPA) developed the Environmental Protection Indicators for California (EPIC) in response to the agency's July 2000 Strategic Vision document, which committed the agency to managing for environmental results and to adopting environmental indicators as a priority. The environmental indicators in EPIC were developed for the purposes of strategic planning, policy formulation, resource allocation, and priority setting under a results-based management system. The EPIC project developed an initial set of indicators based on issue categories that generally mirror Cal/EPA's areas of authority. EPIC is designed to measure the pressures that human activities exert on California's environment, ambient environmental conditions, and the resulting effects on human and ecological health in the state. Most of the indicators focus on environmental resources at the state level. Global or transboundary issues that affect the state, such as global climate change, are also included in EPIC.
In total, Cal/EPA identified about 85 indicators for inclusion in EPIC. The indicators are organized into six levels that range from indicators measuring management actions to those that are direct or indirect measures of ecological or human health. The indicators were presented by Cal/EPA and the California Resources Agency in an April 2002 report (see fig. 10) and in a shorter summary document created to provide a more general overview of the project and the indicators.
Figure 10: Environmental Protection Indicators for California Report:
[See PDF for image]
[End of figure]
Development:
The EPIC project began in January 2001 with a conference designed to engage individuals beyond the participating state agencies in discussions about the areas the indicators should address. Cal/EPA designated its Office of Environmental Health Hazard Assessment to lead and oversee EPIC as a whole. The offices, departments, and boards within Cal/EPA participated in the development of EPIC by identifying data sources and developing indicators. In addition, recognizing that EPIC needed to address environmental protection issues in tandem with resource management issues and the interplay between the environment and human health, both the California Resources Agency and the California Department of Health Services collaborated in the development. Approximately 130 individuals representing various groups were involved in the selection and development of the indicators--an external advisory group, an interagency advisory group, project staff, and seven work groups. Within each issue area, work groups identified parameters that could be used to derive candidate indicators. The indicators they developed in the various issue areas were subject to criteria that included data quality, representativeness, sensitivity, and decision support. Indicators that met the criteria were further evaluated as to whether data were available to present a condition or trend for the issue area.
Indicators were then classified into three categories according to the availability of data collected on a systematic basis.
Use:
Because EPIC's primary purpose is to evaluate Cal/EPA programs, Cal/EPA has begun to use the indicators in a pilot project to institute a performance management system. The project was scheduled for completion in June 2004. Participants in the indicator development process stated that EPIC helped the agency initiate discussions among boards and departments about what indicators were available and how the agency could begin to measure results. The process also helped to identify data gaps.
Next steps:
California has suspended funding for the EPIC project. Cal/EPA staff, however, will continue to evaluate the current set of indicators, identify new indicators, revise and replace existing indicators as appropriate, and publish a progress report outlining their activities on a regular basis.
Quality of Life Indicator Set, Jacksonville, Florida:
Overview:
The Jacksonville Community Council Inc. (JCCI)--a nonprofit organization in Florida--started the Quality of Life indicator set in 1985 to measure the quality of community life and identify aspects of the community that, if improved, would yield significant benefits. As an indicator set, the Quality of Life Progress Report provides information about the community by showing its history, its current status, and the areas requiring attention to reach Jacksonville's goals. The Quality of Life indicator set provides a source of local, summary-level information about Jacksonville. Each annual update represents the community's report card, containing information used to inform the community, ensure public accountability, and guide decision makers to help promote and enhance the quality of life for all citizens in the community. The Quality of Life project initially identified about 75 indicators to track. The latest report (see fig.
11) included 115 indicators focusing on nine areas: environment, economy, education, government, health, recreation, safety, social well-being, and transportation. Each of these areas contains between 8 and 19 individual indicators. The geographic scale of reporting includes Duval County, which encompasses Jacksonville's metropolitan area.
Figure 11: Jacksonville's 2003 Quality of Life Progress Report:
[See PDF for image]
[End of figure]
Development:
The Quality of Life project began with the efforts of a citizens' task force composed of about 100 individuals. The Chairman of JCCI chose the head of a steering committee, which then selected committee members based on their volunteer experience, leadership capabilities, and areas of expertise. The steering committee formed subcommittees/task forces for nine basic quality of life topic areas. For each topic area, the group selected indicators based on the following criteria: validity; availability and timeliness; stability and reliability; understandability; responsiveness; policy relevance; and representativeness. The task forces periodically update the indicators and the associated targets. An update process carried out in 2000 involved almost 90 meetings over 6 months and included volunteers from various groups to assist in the review. Efforts have already been completed to revise the indicators, identify linkages, and set targets for 2005.
Use:
The Quality of Life indicator set was developed to help track the progress Jacksonville is making toward meeting established environmental and other goals. To this end, the City Council, Chamber of Commerce, citywide departments, and others all use this information. The biggest impact of the indicator set has been its ability to educate the public, highlight the environment, and increase community awareness of the environmental issues facing Jacksonville.
In addition, the Quality of Life report has provided essential information for decision makers to address various issues. Next steps: JCCI will continually revise and update the indicators and associated targets to include in its annual progress reports. Recently, JCCI has begun developing indicators that address key issues in the community, such as illiteracy and racial disparity. In addition, JCCI has developed a Replication Kit for communities interested in establishing an indicator project, and has provided direct consulting services. Environmental Indicators Project, West Oakland, California: Overview: The Environmental Indicators Project (EIP) was created to help residents and policymakers use indicator information to initiate a dialogue among residents, policymakers, and the private sector aimed at improving quality of life and creating a healthy, safe environment in West Oakland, California. Community participation in the EIP development process was a critical part of achieving this goal. The EIP began in 2000 with the partnership of the Pacific Institute (a nonprofit organization) and a West Oakland neighborhood organization. The EIP's 17 indicators represent a broad range of environmental concerns in the community, from issues of air quality and toxics to environmental health, land use, housing affordability, transportation, and civic engagement. The EIP includes "environmental indicators" that are broadly defined in an effort to integrate environmental measures with the community's social and economic well-being. The indicators are reported through indicator reports (see fig. 12) to the community and through brochures on groups of indicators relevant to specific community campaigns. Figure 12: West Oakland's Neighborhood Knowledge for Change Report: [See PDF for image] [End of figure] Development: The EIP established a task force of neighborhood residents to identify a core set of indicators that address issues of importance in the neighborhood.
Participation in the indicator development process broadened community involvement beyond the staff of the community-based organizations to include residents who had previously not had access to such information. Task force members selected and developed the indicators by defining the term "environment" in the context of West Oakland; identifying environmental issues in the community; selecting the indicators community members would want to measure and track; and determining how such information could be incorporated into current advocacy, policy, and education. The Pacific Institute's team of researchers then collected and analyzed data from city, county, state, and national agencies to develop the indicators. An additional four indicators were selected by the community as important but were not reported on, either because (1) data were not available or (2) the available data were not reliable, consistent, or regularly updated. The EIP released its report in January 2002 and also designed brochures on groups of indicators relevant to the campaigns to make the information more accessible and understandable to community stakeholders, and to help educate residents on community advocacy efforts. Use: Residents, policymakers, and agencies have used indicator information to begin improving the quality of life for West Oakland residents. For example, the indicators provided evidence that a Red Star Yeast factory located in the community was releasing illegal amounts of toxic air pollutants; the factory was subsequently closed. The EIP has also been valuable to the work of numerous community campaigns and in work with agencies because community testimonials can now be combined with the information presented through the indicators. Next steps: The Pacific Institute will continue to work with community partners to develop a system that ensures that indicators remain accessible to, and are used by, the community.
The Pacific Institute also plans to update the existing indicators and incorporate new ones as necessary. [End of section] Appendix IV: Selected Activities Identifying Need for More Comprehensive Environmental Information: The following tables summarize major congressional attempts to address federal environmental data and indicator issues since 1970, as well as selected academic reports issued during the same period. None of the tables are exhaustive. Rather, the purpose of these lists is to illustrate significant legislative and academic milestones in federal environmental data and indicator management over the last 35 years. While there have been numerous such efforts, both Congress and the academic community had already identified and analyzed many of the fundamental issues confronting indicator development and data management by the close of the 1970s. Perhaps the most significant recent development is the focus since 1990 on the creation of an objective, nonpolitical environmental statistical agency within the federal government, an idea that has appeared in several recent legislative proposals to elevate the Environmental Protection Agency to Cabinet level. Two bills to elevate the EPA, one of which would establish a Bureau of Environmental Statistics, were introduced in the 108th Congress. Selected Legislation to Address Federal Environmental Data and Indicator Issues: Table 7 presents selected Congressional bills introduced since 1970 that deal with significant challenges involving federal environmental data management and indicator development. While Congress has been examining how best to address these challenges for some time, legislative consensus has yet to emerge on many key topics, including whether a Bureau of Environmental Statistics should be established--and if so, whether it should be done as part of legislation to elevate EPA to Cabinet status. 
Table 7: Major Pieces of Legislation to Address Federal Environmental Data and Indicator Issues, 1970-2004: Year introduced: 1970; Bill: H.R. 17436; Principal provisions: Would amend the National Environmental Policy Act of 1969 to create a National Environmental Data System to serve as the central national coordinating facility for the storage, analysis, and retrieval of environmental information to support environmental decisions in a timely manner. Would require each federal agency to make environmental data available to the Data System and would require data in the Data System to be available to Congress, federal agencies, states, and the public. The system would be operated by a director under the guidance of the Council on Environmental Quality. It would develop and publish environmental quality indicators for all of the regions in the United States; Last action: Passed House and referred to the Senate Committee on Commerce. Year introduced: 1970; Bill: H.R. 18141; Principal provisions: Similar to H.R. 17436. Would amend the National Environmental Policy Act of 1969 to provide for a National Environmental Data Bank for all data relating to the environment; Last action: Hearing before the Subcommittee on Fisheries and Wildlife Conservation, House Committee on Merchant Marine and Fisheries. Year introduced: 1971; Bill: H.R. 56; Principal provisions: Would create a National Environmental Data System that would provide for the development and utilization of information needed to support management of the environment. The Data System would serve as the central national facility for the selection, storage, analysis, retrieval, and dissemination of information, knowledge, and data specifically related to the environment. Would require data in the Data System to be available to Congress, federal agencies, states, and the public. 
The Data System would be operated by a director under the guidance of the Council on Environmental Quality and it would develop and publish environmental quality indicators for all of the regions in the United States; Last action: Pocket Veto by President Richard Nixon. Year introduced: 1984; Bill: H.R. 5958; Principal provisions: Would establish a National Commission on Environmental Monitoring to (1) investigate and study the nation's environmental monitoring programs and those international monitoring programs in which the United States participates; (2) recommend to Congress and the President a plan to improve environmental monitoring; and (3) advise and assist in the preparation of an environmental monitoring report; Last action: Referred to subcommittees of House Committee on Merchant Marine and Fisheries and House Committee on Science and Technology. Year introduced: 1990; Bill: H.R. 3847; Principal provisions: Would redesignate the Environmental Protection Agency as the Department of Environmental Protection and establish within it a Bureau of Environmental Statistics. Would require the Secretary of the department to establish an Advisory Committee on Environmental Statistics to (1) advise the director of the bureau and Congress on the collection and dissemination of statistical data, and (2) ensure that the statistics and analyses reported by the bureau are of high quality, publicly accessible, and not subject to political influence; Last action: Passed House and referred to Senate Committee on Governmental Affairs. Year introduced: 1990; Bill: H.R. 
3904; Principal provisions: Would establish the National Environmental Institute Commission to (1) make recommendations to the President and Congress for the establishment of a National Environmental Institute, a Bureau of Environmental Information and Statistics, and an organization to examine public policies that affect the environment; and (2) identify areas of research that require long-term efforts to mitigate serious risk to the environment; Last action: Referred to Subcommittee on Natural Resources, Agricultural Research, and Environment, House Committee on Science, Space, and Technology. Year introduced: 1990; Bill: S. 2006; Principal provisions: Would elevate the Environmental Protection Agency to Cabinet-level status and rename the agency as the Department of the Environment. Would establish a Bureau of Environmental Statistics within the Department and create an Advisory Council on Environmental Statistics to advise the bureau on statistics and analyses, including whether the statistics and analyses disseminated by the bureau (1) were of high quality, and (2) were based upon the best available objective information. It also would authorize the Secretary of the Environment to make grants to, and enter into contracts with, state and local governments to assist in data collection; Last action: Placed on Senate Calendar. Year introduced: 1991; Bill: S. 533; Principal provisions: Would elevate the Environmental Protection Agency to Cabinet-level status and rename the agency as the Department of the Environment. Would establish a Bureau of Environmental Statistics within the Department and create an Advisory Council on Environmental Statistics to advise the bureau on statistics and analyses. It also would authorize the Secretary of the Environment to make grants to, and enter into contracts with, state and local governments to assist in statistical data collection.
Would also direct the Secretary to enter into an agreement with the National Academy of Sciences for a study and report on the adequacy of the department's data collection procedures and capabilities; Last action: Passed Senate and referred to the House Committee on Government Operations. Year introduced: 1991; Bill: S. 2132; Principal provisions: Would require the Environmental Protection Agency to conduct a research program in environmental risk assessment in order to (1) ensure that the risk assessment process is based upon adequate environmental data and scientific understanding, and (2) provide for the most cost-effective use of environmental protection resources. Would direct the Administrator to conduct an environmental monitoring and assessment program to (1) design and evaluate methods and networks to collect monitoring data on the current and changing condition of the environment, (2) implement monitoring programs and manage data from such programs in formats readily accessible to the public, and (3) provide annual statistical reports of the results of such programs to Congress and the public; Last action: Senate Committee on Environment and Public Works held hearing. Year introduced: 1993; Bill: H.R. 
109; Principal provisions: Would establish the Department of the Environment and create a Bureau of Environmental Statistics within the department to (1) compile, analyze, and publish a comprehensive set of environmental quality statistics, which would provide timely summaries in the form of aggregates, multiyear averages, or totals and include information on the nature, source, and amount of pollutants in the environment and the effects of those pollutants on the public and the environment; (2) promulgate guidelines to ensure that information collected is accurate, reliable, relevant, and in a form that permits systematic analysis; (3) coordinate the collection of information by the department for developing statistics with related information-gathering activities conducted by other federal agencies; (4) make the bureau's published statistics readily accessible; and (5) identify data gaps, review the gaps at least annually with the Science Advisory Board, and make recommendations to the department concerning research programs to provide information to fill the data gaps identified; Last action: Referred to the Subcommittee on Legislation and National Security, House Committee on Government Operations. Year introduced: 1993; Bill: H.R.
3425; Principal provisions: Would establish a Department of Environmental Protection and a Bureau of Environmental Statistics within the department to (1) collect, compile, analyze, and publish a comprehensive set of environmental quality and related public health, economic, and statistical data for determining environmental quality and related measures of public health, over both the short and long term, including assessing ambient conditions and trends and the distribution of environmental conditions and related public health conditions; (2) evaluate the adequacy of available statistical measures to determine the department's success in fulfilling statutory requirements; (3) ensure that data and measures referred to in this subsection are accurate, reliable, relevant, and in a form that permits systematic analysis; (4) collect and analyze such other data as may be required to fulfill the bureau's responsibilities and identify new environmental problems; (5) conduct specialized analyses and prepare special reports; and (6) make readily accessible all publicly available data collected; Last action: Failed on House floor. Year introduced: 1993; Bill: S. 171; Principal provisions: Would establish the Department of Environmental Protection and provide for a Bureau of Environmental Statistics within the department, as well as a presidential commission on improving environmental protection. Would require the bureau to issue an annual report on (1) statistics on environmental quality; (2) statistics on the effects of changes in environmental quality on human health and nonhuman species and ecosystems; (3) documentation of the method used to obtain and assure the quality of the statistics presented in the report; (4) economic information on the current and projected costs and benefits of environmental protection; and (5) recommendations on improving environmental statistical information. 
Would authorize the department to make grants to, and contracts with, state and local governments, Indian tribes, universities, and other organizations to assist in data collection. Would abolish the Council on Environmental Quality and transfer all of the council's functions to the Secretary of the new department; Last action: Passed Senate; not voted upon in House. Year introduced: 2000; Bill: H.R. 4757; Principal provisions: Would require the Environmental Protection Agency to establish an integrated environmental reporting system, including a National Environmental Data Model that describes the major data types, significant attributes, and interrelationships common to activities carried out by the Administrator or state, tribal, and local agencies (including permitting, compliance, enforcement, budgeting, performance tracking, and collection and analysis of environmental samples and results). Would require EPA to use the model as the framework for databases on which the data reported to the Administrator through the integrated system would be kept; Last action: Referred to Subcommittee on Health and Environment, House Committee on Transportation and Infrastructure. Year introduced: 2000; Bill: H.R. 5422; Principal provisions: Similar to H.R. 4757, but with some modifications. For example, H.R. 5422 contained an authorization of appropriations; Last action: Referred to Subcommittee on Health and Environment, House Committee on Transportation and Infrastructure. Year introduced: 2001; Bill: H.R. 
2694; Principal provisions: Would establish the Department of Environmental Protection and a Bureau of Environmental Statistics within the department to (1) collect, compile, analyze, and publish a comprehensive set of environmental quality and related public health, economic, and statistical data for determining environmental quality and related measures of public health, over both the short and long term, including assessing ambient conditions and trends and the distribution of environmental conditions and related public health conditions; (2) evaluate the adequacy of available statistical measures to determine the department's success in fulfilling statutory requirements; (3) ensure that data and measures referred to in this subsection are accurate, reliable, relevant, and in a form that permits systematic analysis; (4) collect and analyze such other data as may be required to fulfill the bureau's responsibilities and identify new environmental problems; (5) conduct specialized analyses and prepare special reports; and (6) make readily accessible all publicly available data collected; Last action: Referred to the Subcommittee on Energy Policy, Natural Resources and Regulatory Affairs, House Committee on Government Reform. Year introduced: 2003; Bill: H.R. 2138; Principal provisions: Similar to H.R. 2694. In addition, would require the bureau to (1) prepare and submit to Congress and the department an annual report on environmental conditions and public health conditions, using, to the maximum extent practicable, reliable statistical sampling techniques; and (2) make the annual report available to the public upon request, and publish a notice of such availability in the Federal Register. Would also require the statistical procedures and methodology of the Bureau of Environmental Statistics to periodically undergo peer review; Last action: House Committee on Government Reform held hearing. Source: GAO. 
[End of table] Selected Congressional Hearings Addressing Federal Environmental Data and Indicator Management Issues: Table 8 highlights congressional hearings since 1970 that have addressed one or more salient aspects of the federal environmental information management challenge. As the table indicates, emphasis has shifted over time from the creation of a data bank centralizing all federal environmental information to the creation of a federal statistical agency that would be responsible for keeping environmental statistical information and establishing data quality standards. Hearings have also frequently examined the critical topic of environmental monitoring. Table 8: Selected Congressional Hearings Addressing Federal Environmental Data and Indicator Management Issues, 1970-2004: Hearing: Environmental Data Bank, 1970; Committee: House Committee on Merchant Marine and Fisheries, Subcommittee on Fisheries and Wildlife Conservation; Related bills: H.R. 17436, H.R. 17779, H.R. 18141; Hearing purpose and description: The purpose of the hearing was to examine a proposed amendment to the National Environmental Policy Act of 1969, which would provide for the establishment of a National Environmental Data Bank. The Data Bank would serve as the central national depository of all information, knowledge, and data relating to the environment, including information, knowledge, and data provided by the head of each department, agency, or instrumentality in the executive branch of the United States government as a result of its operations. Hearing: Environmental Monitoring, 1977; Committee: House Committee on Science and Technology, Subcommittee on the Environment and the Atmosphere; Hearing purpose and description: The purpose of the hearing was to (1) examine the existing monitoring efforts of the federal agencies chiefly responsible for environmental monitoring; and (2) investigate the feasibility and practicality of developing and implementing a prototype monitoring system.
The system could eventually be expanded into a comprehensive national or international monitoring program. Hearing: Environmental Monitoring 2, 1978; Committee: House Committee on Science and Technology, Subcommittee on the Environment and the Atmosphere; Related bills: Draft bill; Hearing purpose and description: The purpose of the hearing was to examine a draft bill developed after the 1977 hearings on Environmental Monitoring. The legislation would establish a coordinated, integrative, and cooperative prototype management system of selected, diverse environmental monitoring activities as a possible first step toward a national system to improve the effectiveness and efficiency of environmental monitoring activities. The President would establish and appoint a panel of 10 people, chaired by the Director of the Office of Science and Technology Policy, to develop a prototype monitoring system to demonstrate on a small scale how a national monitoring management system might work. Hearing: National Environmental Monitoring, 1983; Committee: House Committee on Science and Technology, Subcommittee on Natural Resources, Agricultural Research and Environment; Hearing purpose and description: The purpose of the hearing was to explore the condition of the nation's environmental monitoring programs and (1) identify problems in monitoring efforts, and (2) provide recommendations that would lead to improvements in environmental monitoring. Hearing: Environmental Monitoring Improvement Act, 1984; Committee: House Committee on Science and Technology, Subcommittee on Natural Resources, Agricultural Research and Environment; Related bills: Draft bill; Hearing purpose and description: The purpose of the hearing was to examine a draft bill that would create a commission to act as the prime coordinating body for the nation's environmental monitoring efforts. 
The charge of the commission would be to clearly define the operational changes and the administrative coordination necessary to assure that cost-effective and statistically sound and reliable data are available to support U.S. environmental policy making. Hearing: Establish a Department of Environmental Protection, 1989-1990; Committee: House Committee on Government Operations, Subcommittee on Legislation and National Security; Related bills: H.R. 3847; Hearing purpose and description: The purpose of the hearing was to examine two bills that would elevate EPA to Cabinet status. One of the bills (H.R. 3847) would establish a Bureau of Environmental Statistics, which would make accessible a standardized set of environmental quality data to improve the effectiveness and objectivity of central environmental data collection and analyses so that the President, Congress, and the public can be adequately informed about conditions and trends in environmental quality and so that the department can better evaluate its programs. Hearing: EPA Elevation, 2001-2002; Committee: House Committee on Government Reform, Subcommittee on Energy Policy, Natural Resources and Regulatory Affairs; Related bills: H.R. 2694; Hearing purpose and description: The purpose of the hearing was to examine two bills that would elevate EPA to Cabinet status. One of the bills (H.R. 2694) would establish a Bureau of Environmental Statistics to provide environmental quality and related public health and economic information and analyses to meet the needs of the department and Congress. Source: GAO. [End of table] Selected Academic Reports Addressing Federal Environmental Data and Indicator Management Issues: Table 9 highlights a few of the most significant academic reports analyzing federal environmental information management since 1970. Collectively, these reports clearly indicate that most of the significant information challenges have long been recognized. 
Our report makes recommendations that, if implemented, would begin to address these long-standing challenges. Table 9: Selected Academic Reports Addressing Federal Environmental Data and Indicator Management Issues: Year: 1970; Name of organization: National Academy of Sciences; Description: Reported that the United States cannot effectively manage the environment without knowing what it is, what it was, and what it can be. Recommended giving the highest priority to developing a centralized comprehensive federal program for monitoring the environment, incorporating environmental quality indices; Citation: National Academy of Sciences, Institutions for Effective Management of the Environment, report (part 1) of the Environmental Study Group to the Environmental Studies Board of the National Academy of Sciences, National Academy of Engineering (Washington, D.C.: January 1970). Year: 1982; Name of organization: The Conservation Foundation; Description: Reported that the nation had made progress in its attack on some conventional environmental problems; however, the information base on which sound environmental policy depends is inadequate and deteriorating. The nation has no monitoring data sufficient to describe accurately the extent or developing seriousness of any environmental problem; Citation: The Conservation Foundation, State of the Environment 1982 (Washington, D.C.: 1982). Year: 1988; Name of organization: Paul Portney, Resources for the Future; Description: Recommended the creation of a Bureau of Environmental Statistics because the United States does not adequately collect, analyze, and disseminate information about environmental conditions and trends. Environmental data are also not collected in a systematic way that makes them useful to interested parties; Citation: Paul Portney, "Reforming Environmental Regulation: Three Modest Proposals," Columbia Journal of Environmental Law, vol. 13 (1988).
Year: 1997; Name of organization: National Science and Technology Council; Description: Proposed a conceptual framework for integrating the nation's environmental research and monitoring networks to deliver the scientific data and information needed to produce integrated environmental assessments and enhance understanding, evaluation, and forecasting of natural resources; Citation: National Science and Technology Council, Integrating the Nation's Environmental Monitoring and Research Networks and Programs: A Proposed Framework, a report by the Committee on Environment and Natural Resources (Washington, D.C.: March 1997). Year: 1998; Name of organization: National Advisory Council for Environmental Policy and Technology; Description: Reported that EPA information systems do not provide sufficient, appropriate, or accurate information to (1) inform decision making, (2) ensure accountability, or (3) document results and achievements. However, the systems have for the most part satisfied regulatory requirements for collecting environmental information; Citation: National Advisory Council for Environmental Policy and Technology, EPA--Managing Information as a Strategic Resource: Final Report and Recommendations of the Information Impacts Committee, EPA 100-R-98-002 (Washington, D.C.: January 1998). Year: 1999; Name of organization: National Research Council; Description: Addressed the question of whether the U.S. National Income and Product Accounts should be broadened to include activities involving natural resources and the environment. Concluded that the development of environmental and natural resource accounts is an essential investment for the nation; Citation: National Research Council, Nature's Numbers: Expanding the National Economic Accounts to Include the Environment (Washington, D.C.: National Academy Press, 1999).
Year: 2000; Name of organization: National Council for Science and the Environment; Description: Reported that the fragmented administrative jurisdictions among federal agencies charged with environmental stewardship compound difficulties in coordinating environmental research and in communicating scientific results to decision makers and the public. Changes in governmental institutions could significantly improve efficiency and communication among scientists and between scientists and decision makers; Citation: National Council for Science and the Environment, Recommendations for Improving the Scientific Basis for Environmental Decisionmaking, report from the first National Conference on Science, Policy and the Environment (Washington, D.C.: December 2000). Year: 2002; Name of organization: EPA Science Advisory Board; Description: Reported that many scientists, most decision makers, and nearly all members of the public still have little understanding of the "health" or integrity of the nation's ecological systems. Recommended that EPA develop a systematic framework for assessing and reporting on ecological condition, which would help ensure that required information is measured systematically and would provide a template for assembling information across EPA and other agencies; Citation: U.S. Environmental Protection Agency, Science Advisory Board, A Framework for Assessing and Reporting on Ecological Condition: An SAB Report (Washington, D.C.: June 2002). Source: GAO. [End of table] [End of section] Appendix V: Environmental Reporting by Private and Public Organizations: Environmental reporting involves the disclosure of information on environmental performance and management practices, conveying both environmental impacts and the actions being taken to manage those impacts. Some private corporations and public institutions now conduct this type of environmental reporting.
For example, some entities report environmental impacts, such as the amount of natural resources used, the amount of waste generated, and the amount of emissions released by a facility. Reports may also include information on the management efforts that are used to influence environmental impacts, such as details on how a facility is implementing a pollution reduction program. The 1992 United Nations Conference on Environment and Development recognized the importance of this type of information, encouraging private facilities to report "annually on their environmental records, as well as on their use of energy and natural resources" and "on the implementation of codes of conduct promoting best environmental practice."[Footnote 27] Corporate reporting of environmental information is becoming increasingly prevalent in the United States and worldwide. A 2002 survey of the Global Fortune Top 250 international companies (GFT 250) found that since 1999, there has been a 29 percent increase in the number of companies that publish separate reports on various aspects of corporate performance in addition to annual financial reports.[Footnote 28] The majority of these separate reports contained environmental information. The United States had the largest number of reporting companies, with 32 of the 105 U.S. companies in the GFT 250 issuing a report--four more companies than reported in 1999. The survey also examined the top 100 companies in each of 19 different countries. The results show that Japan and the United Kingdom have the largest percentage of top 100 companies publishing reports--72 percent and 49 percent, respectively. The United States was third, with 36 percent of the top 100 U.S. companies reporting in 2002, an increase from 30 percent in 1999.
A separate survey conducted in 2001 found a similar increase in reporting: 50 percent of the GFT 100 companies produced environmental reports, up from 44 percent in 1999.[Footnote 29]

Corporate reporting of environmental information in the United States is sometimes a regulatory requirement. For example, certain facilities are required to submit information on the manufacture, processing, and use of approximately 650 different toxic chemicals to the Environmental Protection Agency's Toxic Release Inventory (TRI) database. The Environmental Protection Agency reported that almost 25,000 facilities submitted TRI information in 2001. Another form of mandatory reporting is the disclosure of information relating to environmental issues required in companies' financial filings with the Securities and Exchange Commission.

Companies may also voluntarily collect and report environmental information when it is not required because of the benefits that this information provides. The environmental information included in these reports can help corporations communicate the environmental impact of economic activities to a wide variety of stakeholders, such as local and planning authorities, community groups, the media, and the general public. Such communications can benefit a corporation by enhancing its reputation and standing as environmentally responsible. This information also benefits corporations by identifying possible savings in both resources used and operating costs and by identifying potential environmental risks, allowing corporations to better anticipate problems and avoid negative publicity on environmental issues. For example, this information can direct a corporation's attention to changes in resource use that yield efficiency savings through lower energy, water, and material costs.
Reporting of standardized information is important in order to examine a facility's progress over time and to compare or aggregate information across many different facilities. Consequently, private and public facilities are adopting voluntary standards and guidelines for environmental reporting. A recent survey of multinational corporations identified some of the most influential voluntary standards now being used by corporations to standardize environmental information.[Footnote 30] Included on this list are the International Organization for Standardization's 14000 series standards, the Global Reporting Initiative, the World Business Council for Sustainable Development, the United Nations' Global Compact, and the Organization for Economic Cooperation and Development's Guidelines for Multinational Enterprises.

Verification of the quality of the information contained in voluntary reports is also important to ease the inherent tension between a facility's desire to present its side of the story and a stakeholder's demand for greater transparency. Just as investors look to independent audits to certify the accuracy and completeness of financial reporting, stakeholders seek such assurances for the information contained in environmental reports. Even so, according to a 2002 study, only 3 percent of the top 100 U.S. companies that reported information had their reports verified by third parties.[Footnote 31]

Environmental reporting is an important consideration for public governmental facilities as well. Executive Order 13148 calls for federal agencies to implement environmental management systems by December 31, 2005, at all appropriate agency facilities. The executive order states that these environmental management systems shall include measurable environmental goals, objectives, and targets to be reviewed and updated annually.
According to the Office of the Federal Environmental Executive, more than 180 federal facilities have already developed and are implementing environmental management systems to ensure compliance with environmental requirements and integrate environmental accountability into decision making and planning. It also reports that, as of December 2002, hundreds of other facilities had initiated the education process needed to ensure commitment to the development of environmental management systems.

Whether the basis for environmental reporting is mandatory or voluntary, environmental reports contain information that can be used by a variety of stakeholders to monitor environmental impacts and inform decision making. For example, this information can inform community leaders and residents in local communities of environmental hazards, show how facilities are addressing specific environmental concerns, and provide an opportunity for the community to identify how a local facility is performing relative to other similar facilities. Employees can also use this information to understand a facility's existing occupational risks. In addition, information that identifies the environmental impacts associated with a product or service throughout its lifecycle can be of interest to customers and consumers and help inform the choices they make. Reporting can also yield information on a facility's environmental vision, environmental performance, future environmental plans, and environmental risks and liability. These issues may interest potential business partners, investors, insurers, and lenders. Finally, this information can further the understanding of government policy analysts regarding current environmental circumstances and inform government decisions on how best to achieve specific environmental objectives.
[End of section]

Appendix VI: Accounting for the Environment:

Environmental accounts can be used to develop indicators that examine the nexus between the environment and the economy. As a result, the development of environmental accounts is widely recognized as important. However, the United States currently has no federal effort to develop comprehensive environmental accounts.

Accounts Yield Indicators with Beneficial Uses:

Environmental accounts provide a framework for linking environmental information to the information contained in the national economic accounts. Combining this information allows environmental and economic issues to be examined jointly. For example, by linking information on the amount of pollution released during a manufacturing process with knowledge of the amount of economic output derived from that manufacturing, policymakers could better understand how a change in regulations, such as new pollution limits, might affect the economic performance of an industry.

Several federal agencies are responsible for managing and protecting the nation's environment and have developed strategic plans that highlight the importance of the interaction between the environment and the economy. For example, the strategic plan of the Environmental Protection Agency identifies procedures to ensure sound analysis of the economic effects of its environmental regulations, policies, and programs. The Department of the Interior's plan sets an objective of managing natural resources in a way that promotes responsible use while sustaining a dynamic economy. The Department of Agriculture's plan identifies the need to manage forests and rangelands so that they are resilient to natural and human disturbance while also managing for economic uses such as oil, natural gas, and timber.
Finally, the National Oceanic and Atmospheric Administration's plan seeks a balance between the protection and use of the ocean's resources that ensures sustainability while also achieving an optimal contribution to the nation's economy.

Environmental accounts provide information that is useful in creating indicators to examine the interaction of the environment and the economy. The following are examples of these potential indicators.

* Policymakers could use efficiency indicators to compare the volume of waste created in production processes with the resources used in production and the total economic output. Such indicators would help policymakers measure and track the use of resources and determine how best to improve the efficiency of resource use and minimize waste generation while considering the potential economic effects of such policies.

* Policymakers could use resource management indicators to determine the amounts of unharvested natural resources still available for future consumption. This information could give policymakers a better understanding of the rate of current resource use, allow for more effective long-term management of natural resources, and help policymakers understand the potential economic effects resulting from changes in resource use.

* Policymakers could use environmental expenditure indicators to manage and track the amount of economic resources being devoted to abating pollution. Such indicators would allow policymakers to identify where resources are being spent to reduce pollution, evaluate the effectiveness of the nation's efforts, and determine the economic impacts resulting from the costs of abating pollution.

Importance of Environmental Accounts Recognized around the World:

There are several efforts under way by governments and nongovernmental organizations to develop environmental accounts.
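The efficiency indicator listed above is, at bottom, a simple ratio of waste generated to economic output. The following sketch illustrates the arithmetic; all sector names and figures are invented for the example and do not come from any actual account.

```python
# Hypothetical efficiency indicator: waste generated per unit of
# economic output. All sector names and figures below are invented
# for illustration; real accounts would draw on measured data.

def waste_intensity(waste_tons: float, output_millions: float) -> float:
    """Tons of waste generated per million dollars of economic output."""
    return waste_tons / output_millions

# Invented sector data: (waste generated in tons, output in $ millions).
sectors = {
    "pulp_and_paper": (12_000, 400),
    "metal_fabrication": (5_500, 275),
}

for name, (waste, output) in sectors.items():
    print(f"{name}: {waste_intensity(waste, output):.1f} tons per $1 million of output")
```

Tracked over time, such a ratio would show whether waste generation is falling relative to output--the kind of joint environmental-economic reading that environmental accounts are intended to support.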
A recent report identifies 19 countries that are developing some type of environmental accounts in their statistical offices or other government ministries.[Footnote 32] Also, the United Nations, along with other international organizations, has developed guidelines for both national and international agencies to use in compiling environmental accounts.[Footnote 33]

Canada and the Netherlands are currently developing environmental accounts alongside national economic accounts to inform policymakers in those countries. First published in 1997, the Canadian System of Environmental and Resource Accounts (CSERA) provides a comprehensive framework for understanding the environment and the economy by presenting environmental information alongside the information in the national economic accounts.[Footnote 34] According to Statistics Canada, while CSERA is a work in progress, information in the accounts has improved policymakers' knowledge of interactions between the environment and the economy in Canada. Statistics Netherlands has published a National Accounting Matrix including Environmental Accounts (NAMEA) for the years 1987 through 1992 and continues to develop the accounts further. NAMEA functions as an instrument for a variety of analyses, including identifying the economic and environmental effects of consuming certain products and the consequences that regulating energy use would have for environmental themes, such as greenhouse gases, and for economic issues, such as national income.

The World Bank and the World Resources Institute have developed their own environmental accounts. The World Bank has developed a measure of net savings that calculates a nation's overall savings rate by including the value of a nation's natural resources along with traditional economic factors. The World Bank currently updates this measure annually for approximately 50 countries.
This measure of adjusted net savings can be compared with traditional economic measures of savings to monitor the potential impacts of natural resource use. The World Resources Institute has created material flow accounts for several industrialized countries. These accounts track the physical flows of natural resources as they move through the economy, including extraction, production, fabrication, use, recycling, and final disposal. According to a World Resources Institute official, a goal of these accounts is to demonstrate to government agencies the value of this environmental and economic information for formulating public policy.

Finally, the National Research Council of the National Academy of Sciences has reported that environmental accounts can provide policymakers with information that would improve decision making, resulting in substantial monetary benefits for the United States. The nation currently invests a substantial amount of money in pollution control to clean the air, water, and land, and environmental accounts could provide the information necessary to help identify how regulations may be refined so that expenditures on pollution control are allocated more efficiently. For example, the National Research Council estimates that improvements in regulations resulting in a 10 percent reduction in pollution control expenditures would save the nation more than $10 billion per year.

The United States Has No Plans to Develop Federal Environmental Accounts:

In the United States, no federal effort to create comprehensive environmental accounts is either under way or planned. In 1992, the Bureau of Economic Analysis (BEA), within the Department of Commerce, began developing a set of comprehensive accounts called the Integrated Economic and Environmental Satellite Accounts (IEESA).
BEA created prototype accounts for the mineral resources sector and planned to continue its IEESA work by developing accounts for other sectors. In 1995, however, a committee report accompanying the Department of Commerce's fiscal year 1995 appropriation directed BEA to suspend this effort and allow for an independent review of the IEESA. The National Academy of Sciences' National Research Council conducted this review and released its final report in 1999,[Footnote 35] recommending that Congress authorize and fund BEA to recommence its work developing the IEESA. However, through fiscal year 2002, congressional appropriations committees directed BEA not to pursue the IEESA initiative. Although this restriction has now been lifted, to date no funding has been appropriated, and BEA currently has no plans to resume the work.

[End of section]

Appendix VII: The Uncertain Cost of Environmental Information:

The collection and provision of federal environmental data and statistics are costly, but it is uncertain how much the federal government spends each year on these activities. While there is no agreed-upon accounting of the federal government's costs for environmental information, two frequently cited sources, despite known shortcomings, represent the best available federal estimates of such costs. In July 1995, the National Science and Technology Council (NSTC) convened a team of federal scientists and program managers to develop a national framework for integrating and coordinating environmental monitoring and related research by amalgamating and building upon existing networks and programs.
In 1997, the team's final report, Integrating the Nation's Environmental Monitoring and Research Networks and Programs, reported that the federal government spent about $650 million on about 31 major federal environmental monitoring and research programs and networks in fiscal year 1995.[Footnote 36] The team arrived at this total by combining the amounts that agencies reported to the team on a project-by-project basis. The total was not disaggregated in NSTC's final report, and the effort has not been updated.

Additionally, the Office of Management and Budget (OMB) annually reports actual and estimated funding for major federal statistical programs[Footnote 37] in Statistical Programs of the United States Government, as required by the Paperwork Reduction Act.[Footnote 38] Major statistical programs differ in organizational structure and in the means through which they are funded. A particular agency may carry out some major statistical programs on its own. For example, according to OMB, the sole mission of the Energy Information Administration, within the Department of Energy, is to develop energy statistics. Other agencies have statistical programs that are an outgrowth of their administrative responsibilities or that support their program planning and evaluation functions. In these cases, the budget for statistical activities constitutes a portion of an agency's total appropriations, including an allocation of the salaries and operating expenses for the statistical program. Funding for statistical activities may increase or decrease as a result of the cyclical nature of surveys. Such increases or decreases should not be interpreted as changes in agency priorities but rather as the normal and expected consequences of the nature of the programs. Agencies may also experience increases or decreases in their budgets when they conduct one-time surveys or studies in a particular fiscal year.
Additionally, a statistical program may not always be executed by the agency that sponsors it. In these instances, the work is done on a reimbursable basis by another federal agency or by a state or local government or a private organization under contract. OMB's reported totals reflect statistical activities in support of the agency's mission, whether the activities are performed by the agency or by contract. OMB divides federal statistical activities into four categories: Health and Safety; Social and Demographic; Natural Resources, Energy, and Environment; and Economic. Table 10 provides the direct funding levels that Congress appropriated for fiscal years 1998 through 2002 for statistical activities in the Natural Resources, Energy, and Environment category.

Table 10: Direct Funding for Major Environment, Energy, and Natural Resources Statistical Programs:

Millions of dollars.

Agency                                                 FY1998  FY1999  FY2000  FY2001  FY2002
Forest Service                                             19      14      23      29      29
Natural Resources Conservation Service                    107     107     108     113     111
NOAA                                                       49      53      54      87      87
Office of Environment, Safety, and Health                  24      24      24      33      34
Energy Information Administration                          66      70      72      78      78
National Institute of Environmental Health Sciences        26      30      39      56      65
Fish and Wildlife Service                                   6       3       4       9       9
Minerals Management Service                                 2       2       3       3       4
National Park Service                                       2       2       2       2       1
Bureau of Reclamation                                       2       3       3       3       4
U.S. Geological Survey                                     64      60      73      83      84
Environmental Protection Agency                           144     192     202     174     148
National Aeronautics and Space Administration              17      17      17      17      17
Total for major environment, energy, and
natural resources statistical programs                    528     577     624     687     671
Total for all federal statistical activities            3,205   4,167   7,755   4,179   4,212

Source: OMB.

Note: Totals reflect actual appropriations.

[End of table]

It is important to note that the totals produced through these efforts are not necessarily representative of the magnitude of federal investment in environmental information--both the total produced by NSTC and the figures produced annually by OMB likely have significant omissions. Moreover, the totals produced by these efforts do not necessarily cover similar activities, although there is likely significant overlap. OMB's classification includes issues (such as energy) and activities (such as statistical consulting or training) that were not necessarily included in NSTC's calculations.
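As a quick arithmetic check on table 10 (all figures transcribed from the table above), summing the agency rows for each fiscal year reproduces the reported category totals:

```python
# Cross-check of table 10: summing the agency figures for each fiscal
# year should reproduce the reported category totals. All figures are
# transcribed from the table (millions of dollars, FY1998-FY2002).

agency_funding = {
    "Forest Service":                          [19, 14, 23, 29, 29],
    "Natural Resources Conservation Service":  [107, 107, 108, 113, 111],
    "NOAA":                                    [49, 53, 54, 87, 87],
    "Office of Environment, Safety, and Health": [24, 24, 24, 33, 34],
    "Energy Information Administration":       [66, 70, 72, 78, 78],
    "National Institute of Environmental Health Sciences": [26, 30, 39, 56, 65],
    "Fish and Wildlife Service":               [6, 3, 4, 9, 9],
    "Minerals Management Service":             [2, 2, 3, 3, 4],
    "National Park Service":                   [2, 2, 2, 2, 1],
    "Bureau of Reclamation":                   [2, 3, 3, 3, 4],
    "U.S. Geological Survey":                  [64, 60, 73, 83, 84],
    "Environmental Protection Agency":         [144, 192, 202, 174, 148],
    "National Aeronautics and Space Administration": [17, 17, 17, 17, 17],
}
reported_totals = [528, 577, 624, 687, 671]

# Sum each fiscal-year column across all agencies.
computed_totals = [sum(column) for column in zip(*agency_funding.values())]
print(computed_totals)  # [528, 577, 624, 687, 671]
assert computed_totals == reported_totals
```

The check confirms that the category totals are internally consistent; it says nothing, of course, about the omissions and classification caveats discussed in the surrounding text.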
GAO was not able to compare the various programs and subprogram activities that constitute the totals produced by these efforts. Reconciling the methodologies used by NSTC and OMB to produce these totals is beyond the scope of GAO's report. In preparing this report, GAO used the estimate reported by NSTC to generally reflect the annual cost to the federal government of collecting environmental information--at least $600 million. However, this figure provides only a limited snapshot of all spending related to collecting and maintaining information on the environment. Agency officials and other experts noted that the actual annual costs of environmental information to the federal government through monitoring, research, statistical, data management, and other activities will remain uncertain until a comprehensive assessment is performed that examines the completeness, overlap, gaps, and quality of the existing programs that produce environmental information.

[End of section]

Appendix VIII: Selected Options:

Experts who participated in the environmental indicator set meeting jointly convened by GAO and the National Academy of Sciences identified a number of short-term alternatives to assist environmental indicator set developers and users. GAO did not independently evaluate these options, which are presented in no specific order. Appearance in this appendix does not constitute an endorsement of the ideas.

* Congress should reinstate Section 201 of the National Environmental Policy Act (NEPA), requiring the Council on Environmental Quality (CEQ) to submit an annual report to the Congress on the environment.

* The Office of Management and Budget should hold a hearing to receive feedback from agencies on the Program Assessment Rating Tool.

* Congress should charge GAO or the Congressional Research Service with an annual review of environmental indicators and their adequacy and utility.
* Federal agencies should pursue an executive order that would establish an interagency work group to deal with environmental information and data, specifically regarding the development of environmental indicators. One expert suggested using Executive Order 13112 (National Invasive Species Council) as a model.

* Congress should commission a study by an independent expert organization, such as the National Academy of Public Administration or the National Academy of Sciences, to review appropriate institutional structures for housing an entity to coordinate the production of environmental information.

* Congress should charge an entity with starting the process of coordinating environmental information and with developing and compiling existing and past environmental indicator efforts.

* Congress should consider acting upon the recommendations presented in the National Science and Technology Council's 1997 report Integrating the Nation's Environmental Monitoring and Research Networks and Programs: A Proposed Framework.

* Congress should task an agency with creating a fully searchable Internet clearinghouse to distribute information about developing and using environmental indicator sets, including links to related environmental data. Portal developers should ensure that linked data are compliant with current Federal Geographic Data Committee standards.

* Congress should continue to support ongoing federal partnerships promoting the integration of environmental data and interagency work on developing standards to ensure data interoperability.

[End of section]

Appendix IX: Comments from the Council on Environmental Quality:

EXECUTIVE OFFICE OF THE PRESIDENT:
COUNCIL ON ENVIRONMENTAL QUALITY:
WASHINGTON, D.C. 20503:
CHAIRMAN:

November 1, 2004:

Mr. John B. Stephenson:
Director, Natural Resources and Environment:
U.S. Government Accountability Office:
441 G Street, N.W.:
Washington, D.C. 20548:

Dear Mr.
Stephenson:

Thank you for the opportunity to review the proposed report entitled Environmental Indicators: Better Coordination Is Needed to Develop Environmental Indicator Sets that Inform Decisions (GAO-05-52). The proposed report is a very timely and comprehensive review of the many efforts underway to improve the reporting of indicators measuring environmental conditions. Your report properly documents the many advancements and challenges that experts in development of environmental indicators recognize.

As your proposed report describes, the Council on Environmental Quality (CEQ) has long been recognized by Congress and others as an appropriate institution to lead such efforts. Consistent with the sections of NEPA that are cited in the report on p. 36, Congress directed CEQ to "review the adequacy of existing systems for monitoring and predicting environmental changes in order to achieve effective coverage and efficient use of research facilities and other resources" and to assist and advise the President by "collecting, collating, analyzing, and interpreting data and information on environmental quality, ecological research and evaluation." 42 U.S.C. 4372(d)(3), (7). Further, CEQ has been directed by Executive Order to "(e) Promote the development and use of indices and monitoring systems (1) to assess environmental conditions and trends, (2) to predict the environmental impact of proposed public and private actions, and (3) to determine the effectiveness of programs for protecting and enhancing environmental quality." Executive Order 11514, as amended by Executive Order 11991, May 24, 1977.

Several years ago, CEQ established an Interagency Working Group on Indicator Coordination. The Working Group developed a Framework for a National System of Indicators on Natural and Environmental Resources and has studied the current institutional arrangements for performing the various functions needed for development and operation of such a system.
It has been drawing upon the work of a number of key "building block" projects that are selecting and identifying indicators for managing natural and environmental resources. These include the EPA Draft Report on the Environment, the Heinz Center's State of the Nation's Ecosystems report, and the work of four roundtables that are using collaborative processes to develop criteria and indicators for sustainable management of natural resources, including forests, rangelands, minerals, and water resources. Further, four agencies with significant responsibility for monitoring and managing--the Departments of Agriculture, Commerce, and the Interior and the Environmental Protection Agency--have begun studies of ways to improve their institutional arrangements for statistical reporting of the indicators that would be included in a comprehensive national system.

While your proposed report recognizes these steps, it suggests that a quasi-governmental agency also be studied. This approach may have merit, particularly if such an organization is created for reporting on a broader set of key national indicators. However, improvements in coordination and statistical reporting within the Federal government will be important irrespective of whether a quasi-governmental agency is formed. Gains can be achieved by moving ahead with improvements within Federal agencies, even while studying alternative organizational options.

One shortcoming in the proposed report is that its main focus is on the status of efforts in the Environmental Protection Agency (EPA). Other agencies have equally important missions for managing natural resources and the environment and collecting data on their conditions.
For example, the Department of the Interior has responsibility for managing public lands, National Parks, and National Wildlife Refuges and for monitoring water resources; the Department of Agriculture has responsibility for monitoring forest land, managing the National Forests, and implementing billions of dollars in conservation programs on agricultural land; and the National Oceanic and Atmospheric Administration has responsibility for oceans, fisheries, and marine sanctuaries. Their contributions to the development and operation of a comprehensive system of indicators and their use of indicators in planning and management are essential to the effectiveness of the Federal government's environmental programs. Other agencies, such as NASA, the National Science Foundation, the Centers for Disease Control and Prevention, OSHA, the Department of Defense, and the Department of Energy, also have a direct or indirect role in this issue. The GAO report should more clearly recognize that a comprehensive set of environmental indicators has the potential for benefiting environmental management across all Federal agencies.

On pages 7, 22, and 30, the proposed report identifies two closely related challenges to effective indicators: linking management actions to environmental conditions and integrating various indicators to better understand the environment. On page 37, the proposed report mentions the work of the Integration and Synthesis Group, an important effort to address these challenges. Over the past year, the group has made significant progress in the development of a systems-based conceptual framework designed to facilitate the selection, interpretation, and reporting of indicators based on a better understanding of the linkages among management actions, social and economic processes, and environmental processes.
Participants in the Integration and Synthesis Group hope that the conceptual framework can be used as the basis for promoting greater integration among the various indicator projects and as a basis for developing a comprehensive national system of indicators that draws upon such projects.

I also recommend that the report take note of the Program Assessment Rating Tool (PART) recently developed by the Office of Management and Budget to improve program performance. Evaluation of federal agencies' indicator work through PART will enable both the executive and legislative branches of government to better understand program performance and identify opportunities for improvement.

Finally, any indicator system is only as strong as the relevance, scope, and accuracy of the data that inform it. Your proposed report currently does not discuss the recent advancement, led by the federal government, of the Global Earth Observation System of Systems (GEOSS), which will include an Integrated Ocean Observing System. The Administration pressed for and obtained worldwide acknowledgement of the need for such a system in the Johannesburg Plan of Action that was produced at the World Summit on Sustainable Development. The Administration then obtained a concrete commitment to action from our G-8 partners. Two subsequent summits have led to a formal partnership that continues. A detailed description can be found at http://earthobservations.org/. I suggest that your report include a discussion of this effort.

In closing, I would like to emphasize CEQ's commitment to improving our capacity to promote coordination and integration of information on environmental and natural resource conditions in the United States. The proposed GAO report documents many of the challenges that this effort involves and provides thoughtful recommendations for addressing them. I look forward to the broader discussion that I know your report will stimulate.

Yours Sincerely,

Signed by:

James L.
Connaughton:

[End of section]

Appendix X: Comments from the Department of the Interior:

United States Department of the Interior:
OFFICE OF THE ASSISTANT SECRETARY, POLICY, MANAGEMENT AND BUDGET:
Washington, D.C. 20240:

OCT 25 2004:

Mr. Ed Kratzer:
Assistant Director:
Natural Resources and Environment Division:
U.S. Government Accountability Office:
441 G Street, N.W.:
Washington, D.C. 20548:

Dear Mr. Kratzer:

Thank you for providing the Department of the Interior the opportunity to review and comment on the draft U.S. Government Accountability Office report entitled "Environmental Indicators: Better Coordination Is Needed to Develop Environmental Indicator Sets that Inform Decisions" (GAO-05-52). In general, we agree with the findings and recommendations in the report, including the recommendation that the Chair of the Council on Environmental Quality (CEQ) direct the Interagency Working Group on Indicator Coordination to study options for performing the functions necessary to guide the development, coordination, and integration of environmental indicator sets. This is essential to assure full consideration of the range of agency programs that are noted in the report, including their unique characteristics and complex interrelationships. The enclosure provides specific comments from the Minerals Management Service and the National Park Service. We hope our comments will assist you in preparing the final report.

Sincerely,

Signed for:

P. Lynn Scarlett:
Assistant Secretary, Policy, Management and Budget:

Enclosure:

[End of section]

Appendix XI: GAO Contact and Staff Acknowledgments:

GAO Contact: Ed Kratzer, (202) 512-6553:

Staff Acknowledgments: Key contributions to this report were made by Nancy Bowser, Chase Huntley, Richard Johnson, Kerry Lipsitz, Jonathan McMurray, Mark Metcalfe, and Nathan Morris. Also contributing to this report were Jonathan Dent, Evan Gilman, Scott Heacock, R. Denton Herring, Kim Raheb, and Greg Wilmoth.
[End of section] Bibliography: Bakkes, J.A., G.J. van den Born, J.C. Helder, R.J. Swart, C.W. Hope, and J.D.E. Parker. An Overview of Environmental Indicators: State of the Art and Perspectives. Report commissioned by the United Nations Environment Programme. June 1994. Bernard, James. State of the Science Ecological Indicators Report. Unpublished draft prepared for the U.S. Environmental Protection Agency. April 2002. Bureau of Economic Analysis. "Integrated Economic and Environmental Satellite Accounts." Survey of Current Business. Washington, D.C; April 1994: 33-49. Carter, J. "Environmental Priorities and Programs--a Message to Congress." Weekly Compilation of Presidential Documents, vol. 15, no. 26 (July 2, 1979): 1353-1373. Clarke, David P. "The State of the Nation's Ecosystems: Measuring the Lands, Waters, and Living Resources of the United States." Environment, vol. 45, no. 5 (2003): 40-43. Department of Public Policy, University of North Carolina at Chapel Hill. Environmental Management Systems: Do They Improve Performance? Project final report: Executive summary prepared for the U.S. Environmental Protection Agency. January 2003. Duncan, Joseph W., and Andrew C. Gross. Statistics for the 21st Century: Proposals for Improving Statistics for Better Decision Making. Chicago: Irwin, 1995. Energy, Environment, and Resources Center, University of Tennessee. Expertise and the Policy Cycle, by Jack Barkenbus. Knoxville, Tennessee; September 1998. Environment Canada. Environmental Indicators and State of the Environment Reporting in Canada--Part 1: Current Trends, Status, and Perceptions. Draft background paper to a national environmental indicator and state of the environment reporting strategy; Washington, D.C; August 2003. Environment Canada. Canadian Information System for the Environment: Sharing Environmental Decisions. Final report by the Task Force on a Canadian Information System for the Environment. Cat. No. EN21-207/2001E. 
Hull, Quebec; October 2001. Environment Canada. A National Strategy for Environmental Indicators and State of the Environment Reporting in Canada: Proposed Options. Draft report. Hull, Quebec; August 2003. Environmental Council of States and U.S. Environmental Protection Agency Information Management Workgroup. National Environmental Information Exchange Network: Information Package. Washington, D.C; June 2001. Flynn, Patrice, David Berry, and Theodore Heintz. "Sustainability and Quality of Life Indicators: Toward the Integration of Economic, Social and Environmental Measures." Indicators: The Journal of Social Health, vol. 1, no. 4 (fall 2002): 1-21. Harwell, Mark A., Victoria Myers, Terry Young, Ann Bartuska, Nancy Gassman, John H. Gentile, Christine C. Harwell, Stuart Appelbaum, John Barko, Billy Causey, Christine Johnson, Agnes McLean, Ron Smola, Paul Templet, and Stephen Tosini. "A Framework for an Ecosystem Integrity Report Card: Examples from South Florida Show How an Ecosystem Report Card Links Societal Values and Scientific Information." BioScience, vol. 49, no. 7 (July 1999): 543-556. Innes, Judith E. Knowledge and Public Policy: The Search for Meaningful Indicators, 2nd ed. New Brunswick, New Jersey: Transaction Publishers, 2002. International Council for Science. Making Science for Sustainable Development More Policy Relevant: New Tools for Analysis. ICSU Series on Science for Sustainable Development No. 8. Paris, 2002. International Institute for Sustainable Development. "Review Paper on Selected Capital Based Sustainable Development Indicator Frameworks." Study for the Steering Committee of the National Roundtable on Environment and the Economy (Canada). Winnipeg, Manitoba; December 2000. National Academy of Public Administration. Evaluating Environmental Progress: How EPA and the States Can Improve the Quality of Enforcement and Compliance Information. Report for the U.S. Environmental Protection Agency. June 2001. 
National Academy of Public Administration. Setting Priorities, Getting Results: A New Direction for EPA. Report to the U.S. Congress. Silver Spring, Maryland: Sherwood Fletcher Associates, 1995. National Council for Science and the Environment. Recommendations for Improving the Scientific Basis for Environmental Decisionmaking. Report of the first National Conference on Science, Policy, and the Environment. Washington, D.C; December 2000. National Research Council. Ecological Indicators for the Nation. Washington, D.C.: National Academy Press, 2000. National Research Council. Our Common Journey: A Transition toward Sustainability. Washington, D.C.: National Academy Press, 1999. National Research Council. Nature's Numbers: Expanding the National Economic Accounts to Include the Environment. Washington, D.C.: National Academy Press, 1999. National Science and Technology Council. Integrating the Nation's Environmental Monitoring and Research Networks and Programs: A Proposed Framework. Report by the Committee on Environment and Natural Resources. Washington, D.C; March 1997. O'Malley, Robin, Kent Cavender-Bares, and William C. Clark. "Providing 'Better' Data: Not as Simple as It Might Seem." Environment, vol. 45, no. 4 (May 2003): 8-18. Riche, Martha Farnsworth. The United States of America: Developing Key National Indicators. Paper presented at the Forum on Key National Indicators; Washington, D.C; February 2003. Sustainability Institute. Indicators and Information Systems for Sustainable Development, by Donella Meadows. Report to the Balaton Group. Hartland, Vermont; September 1998. Tonn, Bruce, and Robert Turner. Environmental Decision Making and Information Technology: Issues Assessment. NCEDR/99-01. National Center for Environmental Decision-Making Research. Knoxville, Tennessee; May 1999. Train, Russell. "The Quest for Environmental Indices." Science 178, no. 4057 (October 1972): 1. 
United Nations. Integrated Environmental and Economic Accounting 2003: Handbook of National Accounting. ST/ESA/STAT/SER.F/61/Rev.1. (1992). U.S. Environmental Protection Agency. A Framework for Assessing and Reporting on Ecological Condition: An SAB Report. EPA SAB-EPEC-02-009. Science Advisory Board. June 2002. U.S. Environmental Protection Agency. Environmental Statistics and Information Division, Office of Policy, Planning, and Evaluation. A Conceptual Framework to Support Development and Use of Environmental Information in Decision-Making. EPA 239-R-95-012, Washington, D.C; April 1995. U.S. Environmental Protection Agency. National Advisory Council for Environmental Policy and Technology. Managing Information as a Strategic Resource: Final Report and Recommendations of the Information Impacts Committee. EPA 100-R-98-002. Washington, D.C; January 1998. World Bank. Policy Applications of Environmental Accounting, by G. Lange. Environmental Economic Series, Paper No. 88. New York, January 2003. World Conservation Union. Lessons Learned from Environmental Accounting: Findings from Nine Case Studies, by Joy Hecht. Washington, D.C; October 2000. World Resources Institute. Environmental Indicators: A Systematic Approach to Measuring and Reporting on Environmental Policy Performance in the Context of Sustainable Development, by Allen Hammond, Albert Adriaanse, Eric Rodenburg, Dirk Bryant, and Richard Woodward. Washington, D.C; May 1995. [End of section] Related GAO Products: Informing Our Nation: Improving How to Understand and Assess the USA's Position and Progress. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-01] Washington, D.C.: November 10, 2004. Watershed Management: Better Coordination of Data Collection Efforts Needed to Support Key Decisions. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-382] Washington, D.C.: June 7, 2004. Geographic Information Systems: Challenges to Effective Data Sharing. GAO-03-874T. Washington, D.C.: June 10, 2003. 
Forum on Key National Indicators: Assessing the Nation's Position and Progress. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-03-672SP] Washington, D.C.: May 1, 2003. Great Lakes: An Overall Strategy and Indicators for Measuring Progress Are Needed to Better Achieve Restoration Goals. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-03-515] Washington, D.C.: April 30, 2003. Environmental Protection: Observations on Elevating the Environmental Protection Agency to Cabinet Status. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-552T] Washington, D.C.: March 21, 2002. Environmental Information: EPA Needs Better Information to Manage Risks and Measure Results. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-01-97T] Washington, D.C.: October 3, 2000. Environmental Information: EPA Is Taking Steps to Improve Information Management, but Challenges Remain. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO/RCED-99-261] Washington, D.C.: September 17, 1999. Environmental Protection: EPA's Problems with Collection and Management of Scientific Data and Its Efforts to Address Them. GAO/T-RCED-95-174. Washington, D.C.: May 12, 1995. Environmental Protection: Meeting Public Expectations with Limited Resources. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO/RCED-91-97] Washington, D.C.: June 18, 1991. Environmental Protection Agency: Protecting Human Health and the Environment through Improved Management. [Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO/RCED-88-101] Washington, D.C.: August 16, 1988. (360343): FOOTNOTES [1] It is difficult to determine exactly what the federal government spends each year on environmental information. Although the Office of Management and Budget annually publishes funding for natural resource, energy, and environmental statistics in Statistical Programs of the United States Government, we were not able to disaggregate the totals by program. 
Moreover, the National Science and Technology Council reported in 1997 that, in fiscal year 1995, the federal government spent about $650 million on environmental research and monitoring networks and programs, but that assessment has not been updated. See appendix VII for more information. [2] In addition, there are several governmentwide requirements that affect environmental data management. For example, the Information Quality Act requires the Office of Management and Budget to provide guidance to federal agencies for maximizing the quality of information they disseminate. [3] Many organizations in the United States are developing comprehensive key indicator systems--organized, systematic efforts to produce selected economic, social, and environmental indicators--to assess position and progress toward specific goals. See GAO, Informing Our Nation: Improving How to Understand and Assess the USA's Position and Progress, GAO-05-01 (Washington, D.C.: Nov. 10, 2004). [4] National Science and Technology Council, Committee on Environment and Natural Resources, Integrating the Nation's Environmental Monitoring and Research Networks and Programs: A Proposed Framework (Washington, D.C; March 1997). [5] National Council for Science and the Environment, Improving the Scientific Basis for Decisionmaking: A Report from the first National Conference on Science, Policy, and the Environment (Washington, D.C; December 2000). [6] Effective May 15, 2000, the Federal Reports Elimination and Sunset Act (Pub. L. No. 104-66, 3003) terminated the CEQ reporting requirement that had appeared in the National Environmental Policy Act. [7] See appendix I for a more thorough description of our survey methodology and its limitations. [8] P.L. 105-391 (1998). [9] Note that GAO did not attempt to independently evaluate the costs, benefits, or risks of developing and using indicator sets that accrue from the positive impacts reported by indicator set users. 
[10] The Federal Water Pollution Control Act Amendments of 1972--which, as amended, is commonly known as the Clean Water Act--requires EPA to compile states' biennial reports on the quality of their waters into the National Water Quality Inventory. See 33 U.S.C.A. 1315(b). [11] For a recent example, see Jacksonville Community Council Inc., Making Jacksonville a Clean City (Jacksonville, Florida; spring 2002). [12] GAO, Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results, GAO-04-38 (Washington, D.C.: Mar. 10, 2004). [13] GAO, Major Management Challenges and Program Risks: Environmental Protection Agency, GAO-01-257 (Washington, D.C.: Jan. 1, 2001). [14] See GAO, Wetlands Overview: Problems with Acreage Data Persist, GAO/RCED-98-150 (Washington, D.C.: July 1998); and Results-Oriented Management: Agency Crosscutting Actions and Plans in Border Control, Flood Mitigation and Insurance, Wetlands, and Wildland Fire Management, GAO-03-321 (Washington, D.C.: Dec. 20, 2002). [15] GAO, Watershed Management: Better Coordination of Data Collection Efforts Needed to Support Key Decisions, GAO-04-382 (Washington, D.C.: June 7, 2004). [16] Council on Environmental Quality, Environmental Quality: The First Annual Report of the Council on Environmental Quality (Washington, D.C; 1970). [17] William Clark, Thomas Jorling, and William Merrell, "Foreword," Designing a Report on the State of the Nation's Ecosystems (Washington, D.C.: H. John Heinz III Center for Economics and the Environment, 1999). [18] An official from the Heinz Center reported that efforts have been made to enhance the likelihood that future reports will be able to quantify this metric. [19] 42 U.S.C. 4321. 
[20] These indicator sets include those developed by the Roundtable on Sustainable Forests, Sustainable Rangelands Roundtable, Sustainable Water Resources Roundtable, Sustainable Minerals Roundtable, EPA's Environmental Indicators Initiative, and the Heinz Center's State of the Nation's Ecosystems project. [21] GAO, The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven, GAO/GGD-97-109 (Washington, D.C.: June 2, 1997). [22] GAO, Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results, GAO-04-38 (Washington, D.C.: Mar. 10, 2004). [23] GAO, Major Management Challenges and Program Risks: Environmental Protection Agency, GAO-01-257 (Washington, D.C.: Jan. 1, 2001). [24] EPA is not the only agency to struggle with this issue. See GAO, Major Management Challenges and Program Risks: Department of the Interior, GAO-03-104 (Washington, D.C.: Jan. 1, 2003); and Department of Agriculture: Status of Achieving Key Outcomes and Addressing Major Management Challenges, GAO-01-761 (Washington, D.C.: Aug. 23, 2001). [25] As we noted in our January 2003 report on EPA's major management challenges and program risks, the indicators initiative has the potential to make a substantial contribution to measuring EPA's progress within an overall framework of ecological and human health, assisting EPA's strategic planning efforts, and facilitating a transition to performance-based management driven by environmental goals. See GAO, Major Management Challenges and Program Risks: Environmental Protection Agency, GAO-03-112 (Washington, D.C.: Jan. 1, 2003). [26] GAO-03-112, 4. [27] United Nations, United Nations Conference on Environment and Development, Agenda 21 (1992). [28] KPMG, KPMG Global Sustainability Services, KPMG International Survey of Corporate Sustainability Reporting 2002 (De Meern, The Netherlands; June 2002). 
[29] Corporate Social Responsibility Network, The State of Global Environmental and Social Reporting, (Shrewsbury, United Kingdom; 2001). [30] World Bank, International Finance Corporation, Race to the Top: Attracting and Enabling Global Sustainable Business, Business Survey Report, by J. Berman and T. Webb (Washington, D.C; October 2003). [31] KPMG, KPMG International Survey of Corporate Sustainability Reporting 2002. [32] G. Lange, Policy Applications of Environmental Accounting, Paper 88, World Bank Environmental Economic Series (Washington, D.C; January 2003). [33] United Nations, Integrated Environmental and Economic Accounting 2003: Handbook of National Accounting (1992). [34] CSERA was first published in 1997, then again in 2000, and updates are planned for 2004. [35] National Research Council, Nature's Numbers: Expanding the National Economic Accounts to Include the Environment (Washington, D.C.: National Academy Press, 1999). [36] National Science and Technology Council, Integrating the Nation's Environmental Monitoring and Research Networks and Programs: A Proposed Framework, a report by the Committee on Environment and Natural Resources (Washington, D.C; March 1997). [37] OMB reports on programs that receive direct funding of at least $500,000 on statistical activities, which include: (1) the planning of surveys and other techniques of data collection; (2) personnel; (3) collection, processing, or tabulation of statistical data for publication, dissemination, research, analysis, or program management and evaluation; (4) publication of data and studies; (5) methodological research; (6) data analysis; (7) forecasts or projections made available for governmentwide or public use; (8) publication of data collected by others; (9) secondary data series or models that are an integral part of generating statistical series or forecasts; (10) management or coordination of statistical operations; and (11) statistical consulting or training. 
[38] Paperwork Reduction Act of 1995, 44 U.S.C. 3504(e)(2). GAO's Mission: The Government Accountability Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO's Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as "Today's Reports," on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select "Subscribe to e-mail alerts" under the "Order GAO Products" heading. Order by Mail or Phone: The first copy of each printed report is free. Additional copies are $2 each. A check or money order should be made out to the Superintendent of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or more copies mailed to a single address are discounted 25 percent. Orders should be sent to: U.S. Government Accountability Office 441 G Street NW, Room LM Washington, D.C. 
20548: To order by Phone: Voice: (202) 512-6000: TDD: (202) 512-2537: Fax: (202) 512-6061: To Report Fraud, Waste, and Abuse in Federal Programs: Contact: Web site: www.gao.gov/fraudnet/fraudnet.htm E-mail: fraudnet@gao.gov Automated answering system: (800) 424-5454 or (202) 512-7470: Public Affairs: Jeff Nelligan, managing director, NelliganJ@gao.gov (202) 512-4800 U.S. Government Accountability Office, 441 G Street NW, Room 7149 Washington, D.C. 20548:
