Water Quality
Inconsistent State Approaches Complicate Nation's Efforts to Identify Its Most Polluted Waters
Gao ID: GAO-02-186 January 11, 2002
The Environmental Protection Agency (EPA) believes that more than 20,000 bodies of water throughout the country are too polluted to meet water quality standards. States use different approaches to identify impaired waters. This variation has led not only to inconsistencies in the listing of impaired waters but also to difficulties in identifying the total number of impaired waters nationwide and the total number of total maximum daily loads (TMDL) needed to bring such waters up to standards. Under the Clean Water Act and its regulations, EPA has given the states some flexibility to develop listing approaches that are tailored to their circumstances. However, some of the approaches have no appropriate scientific basis. States apply a range of quality assurance procedures to ensure the quality of data used to make impairment decisions. Although states have long used quality assurance procedures for the data they collect directly, they have become increasingly vigilant about applying such procedures to data from other sources. Because of inconsistencies in states' approaches to identifying impaired waters, the information in EPA's database of impaired waters is of questionable reliability. The number of impaired waters cannot be compared from one state to the next, and EPA cannot reliably tally the number of TMDLs that must be completed nationwide. EPA's database also distorts the size of some of the states' impaired waters when they are mapped on EPA's website.
This is the accessible text file for GAO report number GAO-02-186
entitled 'Water Quality: Inconsistent State Approaches Complicate
Nation's Efforts to Identify Its Most Polluted Waters' which was
released on January 11, 2002.
This text file was formatted by the U.S. General Accounting Office
(GAO) to be accessible to users with visual impairments, as part of a
longer term project to improve GAO products' accessibility. Every
attempt has been made to maintain the structural and data integrity of
the original printed product. Accessibility features, such as text
descriptions of tables, consecutively numbered footnotes placed at the
end of the file, and the text of agency comment letters, are provided
but may not exactly duplicate the presentation or format of the printed
version. The portable document format (PDF) file is an exact electronic
replica of the printed version. We welcome your feedback. Please E-mail
your comments regarding the contents or accessibility features of this
document to Webmaster@gao.gov.
This is a work of the U.S. government and is not subject to copyright
protection in the United States. It may be reproduced and distributed
in its entirety without further permission from GAO. Because this work
may contain copyrighted images or other material, permission from the
copyright holder may be necessary if you wish to reproduce this
material separately.
United States General Accounting Office:
GAO:
Report to Congressional Requesters:
January 2002:
Water Quality:
Inconsistent State Approaches Complicate Nation's Efforts to Identify
Its Most Polluted Waters:
GAO-02-186:
Contents:
Letter:
Results in Brief:
Background:
States Use Varying Approaches to Identify Impaired Waters:
States Use a Range of Quality Assurance Procedures:
Reliability of EPA's Impaired Waters Database Limited by Inconsistent
Data:
Conclusions:
Recommendations for Executive Action:
Agency Comments and Our Evaluation:
Scope and Methodology:
Appendix I: Status of States' Monitoring and Assessment of Their
Waters:
Appendix II: Comments From the Department of the Interior:
Appendix III: GAO Contacts and Staff Acknowledgments:
GAO Contacts:
Staff Acknowledgments:
Table:
Table 1: Percentage of States' Waters Monitored and Assessed:
Figures:
Figure 1: Key Steps in Identifying Impaired Waters:
Figure 2: Types of Monitoring and the Pollutants or Conditions That
They Measure:
Figure 3: Percentage of States' Rivers and Streams Monitored and
Assessed:
Figure 4: Percentage of States' Lakes, Reservoirs, and Ponds Monitored
and Assessed:
Figure 5: The Sabine River Between Texas and Louisiana:
Figure 6: The Missouri River Between Nebraska and Iowa and Several
Small Streams on the Border of Nebraska and Kansas:
Abbreviations:
CALM: Consolidated Assessment and Listing Methodologies:
EPA: Environmental Protection Agency:
PCB: Polychlorinated biphenyl:
TMDL: Total maximum daily load:
USGS: United States Geological Survey:
WATERS: Watershed Assessment, Tracking, and Environmental Results:
[End of section]
United States General Accounting Office:
Washington, DC 20548:
January 11, 2002:
The Honorable Don Young:
Chairman, Committee on Transportation and Infrastructure:
House of Representatives:
The Honorable John J. Duncan, Jr.
Chairman, Subcommittee on Water Resources and Environment:
Committee on Transportation and Infrastructure:
House of Representatives:
Although the precise number is not known, the Environmental Protection
Agency (EPA) believes that over 20,000 bodies of water throughout the
country are too polluted to meet water quality standards. Among the
primary concerns associated with these waters are human health
problems, caused either directly by coming into contact with
contaminated waters or indirectly through consumption of contaminated
fish. Under the Clean Water Act, states must identify bodies of water
that are not meeting applicable state water quality standards and
submit a list of those waters to the EPA, along with an explanation of
the methodology used to identify them. To bring these waters into
compliance with the standards, states are required to establish a
pollutant "budget," or total maximum daily load (TMDL), for each
pollutant causing a body of water to be impaired. A TMDL is the
maximum amount of a pollutant that can enter a body of water without
exceeding the water quality standard for that pollutant.
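The TMDL concept can be illustrated with a small calculation. The sketch below is ours, not EPA's: the numbers are invented, the function names are hypothetical, and the 8.34 factor is the standard conversion among flow in million gallons per day, concentration in mg/L, and load in pounds per day. In EPA practice, a TMDL is apportioned among point-source wasteload allocations, nonpoint-source load allocations, and a margin of safety, all of which must fit within the water's loading capacity.

```python
# Illustrative sketch only; values and names are hypothetical.

LBS_PER_MGD_MG_L = 8.34  # converts (million gallons/day) * (mg/L) to lbs/day

def loading_capacity(flow_mgd, criterion_mg_per_l):
    """Maximum daily load (lbs/day) a water body can receive without
    exceeding the numeric water quality criterion for a pollutant."""
    return flow_mgd * criterion_mg_per_l * LBS_PER_MGD_MG_L

def within_budget(wasteload_alloc, load_alloc, margin_of_safety, capacity):
    """A TMDL 'budget' holds when the allocations plus the margin of
    safety do not exceed the loading capacity."""
    return wasteload_alloc + load_alloc + margin_of_safety <= capacity

capacity = loading_capacity(flow_mgd=10.0, criterion_mg_per_l=1.0)
print(round(capacity, 1))                        # 83.4 (lbs/day)
print(within_budget(50.0, 25.0, 8.0, capacity))  # True: 83.0 <= 83.4
```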
In March 2000, we reported that states have little of the information
they need to assess the quality of their waters and, in particular, to
identify those that are impaired, a particularly serious problem given
the resources needed to address such impairments.[Footnote 1] Concerned
about possible inconsistencies in the way that states identify impaired
waters and EPA conveys information about such waters to the public, you
asked us to (1) identify and assess the effects of any differences in
states' approaches to identifying impaired waters, (2) determine how
states ensure the quality of data used to identify impaired waters, and
(3) assess the reliability of the information in EPA's database of
impaired waters. To respond to your questions, we analyzed written
methodologies that all 50 states and the District of Columbia submitted
to EPA with their lists of impaired waters. We also completed a
telephone survey of water quality officials from 15 randomly selected
states to obtain more detailed information about states' processes for
identifying impaired waters, identify the methods they use to ensure
the quality of data collected, and determine how accurately they
believe their state's water quality is reflected in information that
EPA provides to the public on the Internet. We also analyzed the EPA
database containing states' data on impaired waters and TMDLs.
Results in Brief:
The approaches used to identify impaired waters vary considerably among
states. Variation among the states stems from a combination of factors,
including differences in the (1) water quality standards (including
designated or beneficial uses and criteria) for determining which
waters are impaired; (2) types of monitoring practices used to
ascertain whether these standards are exceeded; (3) procedures used to
assess water quality data to make listing decisions; and (4) guidance
EPA regions give on grounds for removing waters from state lists of
impaired waters. This variation leads not only to inconsistencies in
the listing of impaired waters but also to difficulties in identifying
the total number of impaired waters nationwide and the total number of
TMDLs that states say will be needed to bring such waters up to
standards. Of particular note, there have been numerous cases in which
neighboring states share a common body of water that is listed as
impaired by one state but not by the other. Under the Clean Water Act
and its regulations, EPA has provided some flexibility to states to
develop listing approaches that are appropriate to their ecological and
other conditions. However, some of the variations in approaches have no
appropriate scientific basis. EPA has published one set of guidance
that it believes will address some of these inconsistencies. It is also
planning to issue a second set of guidance to improve consistency among
state approaches and in state methodologies.
States apply a range of quality assurance procedures to ensure that
data used to make impairment decisions are of sufficient quality. In
general, the procedures vary in their rigor. While states have long
used quality assurance procedures for the data they collect directly,
they have become increasingly vigilant about applying such procedures
to the data they use from other sources. Because of the significant
consequences of designating a body of water as impaired, officials from
all 15 states that we contacted said that they examine data from other
sources to determine quality, although the level of quality assurance
that the states apply varies. For example, we identified seven states
across the country that have passed data integrity laws prescribing
minimum data requirements, such as the number of samples needed to make
water quality determinations. EPA officials told us that, overall,
these states' efforts are an attempt to increase the quality and
credibility of their listing decisions. They cautioned, however, that
states should balance the need for quality with EPA's requirement that
they consider all readily available data to avoid rejecting data that
indicate an impairment could exist.
Owing, in part, to the inconsistencies in states' approaches to
identifying impaired waters, the information in EPA's database of
impaired waters is of questionable reliability. EPA has undertaken
significant efforts to integrate states‘ data and present it to the
public over the Internet, but the information it presents can be only
as good as the information the agency enters into the underlying
database. Inconsistencies in the data that states submit are compounded
by the different ways that they submit data to EPA for inclusion in the
system. For example, some states submit lists that count several small
segments of a river or stream as individually impaired waters, while
other states submit lists that identify larger segments of a river or
stream as impaired. As a result, the numbers of impaired waters cannot
be compared from one state to the next and EPA cannot reliably tally
the number of TMDLs that must be completed nationwide. In addition,
EPA's database distorts the size of some of the states' impaired
waters when they are mapped on EPA's Web site. Fewer than one-third of
the state water quality officials that we interviewed told us that
their state's water quality is reflected "very" or "somewhat"
accurately on the EPA Web site.
We are making recommendations to EPA aimed at increasing consistency in
the ways that states develop and make changes to their lists of
impaired waters. We are also recommending that EPA improve the way it
characterizes information on its Web site so that users more clearly
understand the limitations of the data presented. In commenting on a
draft of the report, EPA said that the recommendations were reasonable,
and noted that the agency has several initiatives under way to address
some of the issues raised in the report. We agree that EPA's
initiatives will help to address some of our recommendations. One of
the initiatives, however, a key guidance document called the
Consolidated Assessment and Listing Methodologies guidance, has not yet
been issued. In addition, the EPA initiatives do not fully address
recommendations designed to promote greater consistency in how states
remove waters from their impaired waters lists, and how they list
interstate waters. Accordingly, we did not revise the recommendations
contained in our draft report. We also provided the draft to the
Department of the Interior for comment. The Department's December 13,
2001, letter said that the report covered a complicated and detailed
topic well (see app. II).
Background:
The primary objective of the Clean Water Act is to "restore and
maintain the chemical, physical, and biological integrity of the
Nation's waters." As authorized under the act, states have primary
responsibility for implementing programs to manage water quality. As a
first step, states set water quality standards to achieve designated
(or beneficial) uses for waters, such as recreational, agricultural,
industrial, or other uses. These standards are then used, among other
things, to determine whether the states' waters are impaired.
In addition to establishing water quality standards, states are also
responsible for monitoring the quality of their waters, assessing all
readily available water quality data to determine whether the criteria
for particular waters are being met, and reporting on the quality of
their waters to EPA. Generally, to monitor water quality, states select
the rivers, lakes, and other waters for which they plan to collect data
during a specific period of time and collect water samples from them.
After the data are collected, the states analyze the data and compare
the results to their standards to assess whether the water bodies are
meeting standards. In assessing their waters, state agencies
responsible for water quality programs can also use data collected by
other state agencies, federal agencies, volunteer or academic groups,
and other entities. For example, one source used by many states is the
U.S. Geological Survey (USGS) within the Department of the Interior,
which has a large program for monitoring water quality. Under section
305(b) of the act, states are responsible for reporting biennially on
the quality of their waters, and EPA is responsible for compiling these
reports into the National Water Quality Inventory. As part of this
effort, EPA provides guidance to states on monitoring and assessing
their waters.
In addition to reporting on the overall quality of their waters,
states are required under the Clean Water Act to identify waters that
do not meet applicable water quality standards. Specifically, section
303(d) of the
act requires states to list the waters within their state boundaries
for which certain technological controls required under the act are not
stringent enough to implement applicable standards. Under the act, EPA
must approve the states' lists. The 303(d) lists identify waters in
which pollutants need to be reduced. States are required to develop a
TMDL for each of the pollutants affecting each impaired body of water.
Under the act, if states do not establish TMDLs for impaired waters,
EPA must do so.
While the states are primarily responsible for managing water quality,
EPA is responsible for developing regulations and guidance implementing
the act. In 1985, EPA issued water quality regulations requiring states
to provide a list of impaired waters.[Footnote 2] In 2000, EPA
finalized major revisions to these regulations that would have required
the states to develop more comprehensive lists of impaired waters and
would have clarified the required elements of a TMDL. However, Congress
postponed implementation of these revisions, in part because of
widespread concerns among a variety of groups. Because the regulations
were in flux during 2000, EPA waived the requirement for states to
submit their lists that year; instead, states are required to submit
their next list by October 1, 2002. In October 2001, EPA further
postponed the effective date of the revised regulations to April 30,
2003. Prior to that time, EPA plans to develop a second set of revised
regulations.
Concern over the impaired waters program has led to years of litigation
among states, EPA, and interest groups. Lawsuits in 38 states have
resulted in almost two dozen consent decrees requiring states to
develop TMDLs or requiring EPA to develop them if states fail to do so.
At congressional hearings in 2000, we and other organizations raised
concerns over the ability of states to gather the data needed to
monitor their waters, and in particular to support the identification
of impaired waters needing TMDLs. As a result of these concerns,
Congress requested the National Academy of Sciences' National Research
Council to study the scientific basis for the TMDL program. The council
issued a report in June 2001 that expressed support for the TMDL
program but called for improvements in how impaired waters are
identified and how TMDLs are developed.[Footnote 3]
States Use Varying Approaches to Identify Impaired Waters:
While the general process that states follow to identify impaired
waters is similar, the specific approaches they use vary considerably
among states. Generally, the process involves establishing water
quality standards, gathering data on water quality through monitoring,
and assessing the data to determine whether the criteria and standards
are being met or whether a body of water is impaired (see fig. 1). If a
state determines that a previously listed body of water is no longer
impaired, then it can seek EPA's approval to remove that body of water
from its list. Variation in the approaches that states use occurs at
each step in the process and causes inconsistencies in the listing of
impaired waters. These inconsistencies are particularly apparent in
cases of interstate waters. EPA published one set of guidance in
November 2001 that it believes will address some of these
inconsistencies. It plans to issue a second set in early 2002 to
address other causes. However, EPA officials stated that the underlying
causes of inconsistent listings require long-term action.
Figure 1: Key Steps in Identifying Impaired Waters:
[Refer to PDF for image]
This figure is an illustration of the key steps in identifying impaired
waters, as follows:
1) Establish water quality standards;
2) Collect/access water quality data;
3) Does the water meet standards?
* Yes: If the water was on the 303(d) list, it can be removed; return
to step 2;
* No: Place the water on the 303(d) list; return to step 2.
Note: Not all waters are monitored and assessed each cycle.
Source: GAO analysis of EPA documents.
[End of figure]
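The decision loop in figure 1 can be sketched in code. This is our illustration of the logic only, with hypothetical names and a simplified assessment rule; actual state methods differ, as discussed below.

```python
# Sketch of the figure 1 loop: assess monitoring data against a numeric
# criterion and update the 303(d) list. Names and rule are illustrative.

def meets_standard(samples, criterion):
    """Simplified rule: the water meets the standard when no sample
    exceeds the numeric criterion. Real state methods vary, e.g., by
    allowing a percentage of samples to exceed it."""
    return all(s <= criterion for s in samples)

def update_303d_list(listed, water, samples, criterion):
    if meets_standard(samples, criterion):
        listed.discard(water)  # previously listed water can be removed
    else:
        listed.add(water)      # impaired: place on the 303(d) list
    return listed

listed = update_303d_list(set(), "Example Creek", [0.8, 1.2], criterion=1.0)
print(sorted(listed))  # ['Example Creek']
```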
Water Quality Standards Are Often Inconsistent:
Water quality standards can vary significantly among states. Variations
in water quality standards arise from differences among states in two
components of the standard-setting process: (1) the identification of
designated (or beneficial) uses for a particular body of water and (2)
the development of water quality criteria to protect each use.
According to EPA, some of these variations are appropriately based on
different ecological conditions, but others are not. For example,
states with coastal plains could appropriately have lower standards
for dissolved oxygen than states with high mountain streams.
Inappropriate variations may arise if states with shared or immediately
adjacent water bodies designate them for different uses. For example,
one may consider the water suitable for swimming and therefore have
more stringent water quality criteria, while a neighboring state may
consider the same water to be used for wading, which requires less
stringent criteria.
Designated Uses:
Designated uses are the beneficial uses, established by states based
on social and environmental factors, that waters are intended to
support.
For example, a water may be designated for use as a public water supply
or to support aquatic life, irrigation, or contact recreation.
Officials in some states said that the designated uses in their
states are appropriate; others said they are not. Of the 15 state
officials that we
interviewed, 8 acknowledged that designated uses in their states need
to be revised. For example, all waters in Virginia are designated for
swimming even though some of the waters are inaccessible and too
shallow for swimming purposes. Other waters in Virginia are impaired by
bacteria from wildlife sources and cannot achieve the primary contact
use. As a result, these waters do not meet the water quality standard
set for them. In other states, in some cases where designated uses are
inappropriate and need revision, waters may be considered impaired by
natural water quality conditions. Yet, one state may list such waters
as impaired, while another might not. For example, according to the
states' 1998 303(d) listing methodologies, Arizona precludes the
listing of waters impaired by naturally occurring conditions, while
California includes such waters on its list.
One explanation for the problems with many designated uses is that
states established many of them en masse in the early 1970s in order to
meet the requirements of the Clean Water Act. States had 180 days to
put designated uses in place, and many used the highly general goals
of the Clean Water Act (fishable-swimmable waters) as their designated
uses.
In addition, implementation of the act initially focused on installing
controls on individual point sources of pollution and little attention
was paid to whether overall water quality met specific standards.
Reflecting these concerns, the National Research Council's recent
report states that many designated uses are too broad and need to be
refined in order to incorporate the range of scientific data and social
needs for water quality. The Council's report recommended that states'
designated uses should be divided into several tiers to more adequately
represent water quality conditions and that water quality criteria
should have a more logical link to the designated use to sufficiently
measure attainment. According to responses from our 15-state survey,
such a refinement in states' designated uses and water quality criteria
would most likely result in different waters being listed.
Water Quality Criteria:
Water quality criteria provide thresholds for determining whether
bodies of water can support their designated uses. As with designated
uses, criteria used by states vary and in many states need updating.
Variation among states is primarily caused by different states focusing
on different pollutants, mainly because of differences in water quality
criteria. Illinois, for example, has numeric water quality criteria for
two pollutants, sediment and nutrients, for which neighboring Indiana
does not have numeric criteria. As a result, Illinois listed 32 percent
of its waters as impaired by sediment, while Indiana listed none.
Similarly, Illinois listed 22 percent of its waters as impaired by
nutrients, but Indiana listed less than 1 percent as so impaired. In
some instances, neighboring states may both have numeric criteria for a
given pollutant, but the criteria may differ significantly. Connecticut
and New York on the Long Island Sound have different criteria for
dissolved oxygen and, therefore, list the Sound differently.
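The effect of differing numeric criteria on a shared water, such as the dissolved oxygen example above, can be shown with a short sketch (our illustration; the criterion values and sample data are invented):

```python
# Hypothetical: two states assess the same shared water against
# different dissolved oxygen criteria. Values are invented.

def impaired_for_dissolved_oxygen(samples_mg_l, minimum_mg_l):
    """Dissolved oxygen is a 'floor' criterion: the water is impaired
    if any sample falls below the state's minimum."""
    return any(s < minimum_mg_l for s in samples_mg_l)

shared_water = [4.6, 5.1, 5.4]  # identical data seen by both states

print(impaired_for_dissolved_oxygen(shared_water, minimum_mg_l=5.0))  # True
print(impaired_for_dissolved_oxygen(shared_water, minimum_mg_l=4.0))  # False
```

With the same monitoring data, the state applying the stricter 5.0 mg/L minimum lists the water as impaired while its neighbor does not.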
States also vary in the extent to which they use narrative criteria
versus numeric criteria to make a listing determination. For example,
Nevada focuses its listing determinations on violations of numeric
water quality criteria. On the other hand, Massachusetts used narrative
criteria to list approximately one-third of its reported impairments
because it judged that the designated use was impaired. Under these
criteria, Massachusetts considered a lake to be impaired (for
swimming) if noxious aquatic plants covered over 50 percent of its
area. Massachusetts' officials conceded that their narrative criteria
may not
correctly identify when a lake is impaired for various uses, and they
are currently working on revising the water quality standards.
Other states also discussed the need to revise criteria that are
difficult to use in identifying impairments. Officials in 14 of the 15
states represented in our interviews believe that water quality
criteria in their states need to be revised. Their views are
consistent with those of the National Research Council, which noted in
its report that
criteria should be measured by reasonably obtainable monitoring data
and should be defined in terms of magnitude, frequency, and duration.
Some state officials mentioned that they would like to switch their
narrative criteria to numeric criteria to provide a clearer threshold
for demonstrating whether an impairment exists. Officials indicating
their water quality criteria need to be revised told us that such
revisions could change the waters states have listed and the number of
waters listed. The most common pollutants for which the state officials
we interviewed believe water quality criteria need to be revised are
nutrients,[Footnote 4] bacteria, sediment, dissolved oxygen, and
metals. These five pollutants have been found to be among the leading
causes of impairment nationwide.
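Converting a narrative criterion into a numeric one, as some officials told us they would like to do, amounts to stating an explicit threshold. Massachusetts' lake rule described above, for example, could be restated as a quantitative check (our sketch; the names and values are hypothetical):

```python
# Sketch of Massachusetts' narrative criterion restated numerically:
# a lake is impaired for swimming when noxious aquatic plants cover
# more than 50 percent of its area. Names and values are illustrative.

PLANT_COVER_THRESHOLD = 0.50

def impaired_for_swimming(plant_cover_acres, lake_acres):
    return plant_cover_acres / lake_acres > PLANT_COVER_THRESHOLD

print(impaired_for_swimming(60.0, 100.0))  # True: 60 percent coverage
print(impaired_for_swimming(30.0, 100.0))  # False: 30 percent coverage
```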
Monitoring Practices Differ Significantly:
States use a variety of monitoring practices. In order to determine
whether water quality standards are being met, states monitor their
waters by collecting samples of water or other indicators such as
sediment, fish, or macroinvertebrates. To establish a monitoring
system, states select which water bodies to monitor and determine,
based on their water quality standards, the conditions for which they
will sample and test. They also determine how often to take samples. In
addition to their own data, states can use data from other sources such
as universities, other federal and state agencies, and volunteer
groups. Variation in states‘ practices can be seen in the types and
comprehensiveness of each state‘s monitoring.
Types of Monitoring:
States monitor water quality conditions in three ways: chemical
monitoring is used to assess levels of dissolved oxygen, suspended
sediments, nutrients, metals, oils, and pesticides; physical monitoring
is used to assess general conditions such as temperature, flow, water
color, and the condition of stream banks and lake shores; and
biological monitoring is used to assess the abundance and variety of
aquatic plant and animal life and the ability of test organisms to
survive in sample water (see figure 2). USGS officials recommend that
states utilize all three types of monitoring to help ensure that water
quality conditions are adequately characterized. The officials
explained that although biological indicators may be used to identify
the condition of the waters, it is physical and chemical factors that
are adjusted, for example, by improving habitat or reducing
discharges, to achieve biological goals. Similarly, the National
Research Council reported
that biological indicators integrate the effects of multiple stressors
over time and space and recommended that they be used in conjunction
with physical and chemical criteria.
Figure 2: Types of Monitoring and the Pollutants or Conditions That
They Measure:
[Refer to PDF for image]
This figure illustrates the types of monitoring and the pollutants or
conditions that they measure, as follows:
Biological:
Assesses:
* structure and function of aquatic communities;
* habitat, such as condition of riparian vegetation;
* health and abundance of aquatic species or fish populations;
* indicator bacteria.
Physical:
Measures:
* temperature;
* conductivity;
* transparency;
* total suspended solids;
* flow.
Chemical:
Tests for levels of:
* pesticides;
* organics;
* metals (cadmium, arsenic, etc.);
* nutrients (phosphorous, nitrogen);
* toxic materials in fish tissue;
* dissolved oxygen.
Source: GAO analysis and interpretation of EPA data.
[End of figure]
States vary in their emphasis on these different types of monitoring.
For example, Illinois, Maine, and Ohio rely primarily on biological
monitoring while Texas and Utah rely on chemical and physical
monitoring. A 1998 Ohio study suggests how these divergent monitoring
approaches may yield different impairment determinations for waters.
[Footnote 5] This study found that of 645 waters monitored, 50 percent
met chemical but not biological criteria. It also showed that the
proportion of assessed waters identified as impaired increased from 9
percent in 1986 to 44 percent in 1988, and that the increase
was due primarily to the increased use of biological monitoring to
support numeric biologic criteria. Water quality managers in Utah
stated that, depending on available funding, they hope to increase
biological and habitat monitoring; more impaired waters would probably
be identified and listed as a result.
In addition to differences in the types of monitoring that states
perform, states also differ in the emphasis that they place on various
pollutants in their monitoring programs. For example, according to
Indiana officials, Indiana conducts more bacteriological monitoring
than bordering states, and has consequently identified 13 percent of
its impaired waters as impaired by bacteriological pathogens. In
comparison, neighboring Illinois and Ohio, which conduct less
bacteriological monitoring, have identified only 1 and 2 percent,
respectively, of their impaired waters as impaired by such pathogens.
Comprehensiveness of Monitoring Programs:
States also vary in the comprehensiveness of their monitoring programs.
In 1998, the percentage of rivers and streams monitored and assessed by
states ranged from 0 to 100 percent; 39 states had monitored and
assessed under 50 percent of their rivers and streams. Similarly, the
percentage of lakes, reservoirs, and ponds monitored and assessed by
states ranged from 0 to 100 percent; 18 states monitored and assessed
less than 50 percent of these waters (see figs. 3 and 4). Finally,
several states that have estuaries and ocean shorelines monitored and
assessed 100 percent of these waters; however, other states have not
monitored and assessed these waters (see app. I for a detailed list of
the percentages by state). As we noted in our March 2000 report, state
officials told us that more comprehensive monitoring would have
identified more impaired waters. In the 50-state survey conducted for
that report, just 18 states reported that they had a majority of the
data they needed to place assessed waters on their 303(d) list. Most
respondents said that increased monitoring of their state's waters
would be most helpful in improving their 303(d) lists.[Footnote 6]
Figure 3: Percentage of States' Rivers and Streams Monitored and
Assessed:
[Refer to PDF for image]
This figure is a map of the United States depicting the percentage of
rivers and streams assessed in states as follows:
80% to 100% (8):
Delaware;
Hawaii;
Maine;
New York;
North Carolina;
Tennessee;
Washington;
Wyoming.
60% to 79% (1):
South Carolina.
40% to 59% (7):
Michigan;
Mississippi;
Missouri;
New Jersey;
Oregon;
Rhode Island;
Wisconsin.
20% to 39% (9):
Colorado;
Illinois;
Indiana;
Maryland;
New Hampshire;
North Dakota;
South Dakota;
Virginia;
West Virginia.
0 to 19% (25):
Alabama;
Alaska;
Arizona;
Arkansas;
California;
Connecticut;
Florida;
Georgia;
Idaho;
Iowa;
Kansas;
Kentucky;
Louisiana;
Massachusetts;
Minnesota;
Montana;
Nebraska;
Nevada;
New Mexico;
Ohio;
Oklahoma;
Pennsylvania;
Texas;
Utah;
Vermont.
Source: EPA's 305(b) report for 1998.
[End of figure]
Figure 4: Percentage of States' Lakes, Reservoirs, and Ponds Monitored
and Assessed:
[Refer to PDF for image]
This figure is a map of the United States depicting the percentage of
states' lakes, reservoirs, and ponds monitored and assessed, as
follows:
80% to 100%:
Alabama;
Delaware;
Georgia;
Kansas;
Kentucky;
Maine;
Missouri;
Montana;
New York;
North Carolina;
North Dakota;
Oregon;
Rhode Island;
Tennessee;
Utah;
Virginia;
West Virginia.
60% to 79%:
Arkansas;
Illinois;
Minnesota;
Nevada;
Wisconsin.
40% to 59%:
California;
Connecticut;
Florida;
Louisiana;
Maryland;
Massachusetts;
Mississippi;
New Jersey;
Oklahoma;
South Carolina;
Texas;
Washington.
20% to 39%:
Arizona;
Colorado;
Indiana;
New Hampshire;
South Dakota.
0 to 19%:
Alaska;
Hawaii;
Idaho;
Iowa;
Michigan;
Nebraska;
New Mexico;
Ohio;
Pennsylvania;
Vermont;
Wyoming.
Source: EPA's 305(b) report for 1998.
[End of figure]
States are required by regulation to assemble and evaluate "all
existing and readily available water quality-related data and
information," including data from external sources such as federal
agencies, volunteer or academic groups, and other entities. However,
states vary in their use of these sources of data. Officials we
interviewed from 7 of the 15 states said that they used external
sources of data to a "moderate" extent, and officials from 5 states
said they used the sources to a "minor" or "very minor" extent. Most
state officials commented that external data and information they
received, although not used to make listing determinations, triggered
follow-up monitoring by the state.
States Use Different Data Assessment Methods:
After states collect data, they must have methods in place to assess
the data to determine whether waters are impaired. States vary widely
in their use of such assessment methods. The key differences that we
found in states‘ assessment methods were (1) the extent to which states
make listing determinations based on "monitored" versus "evaluated"
data, (2) how states use fish consumption advisories in making
impairment decisions, and (3) how states compare water quality data
with water quality criteria in determining whether waters meet
standards.
Use of Monitored Versus Evaluated Data:
According to EPA, monitored data are those that have been collected
within the past 5 years and are believed to accurately portray water
quality conditions. In contrast, evaluated data include monitored data
that are more than 5 years old, as well as other types of information
such as land-use data, predictive models, and other less precise
indicators of water quality. The extent to which states use evaluated
versus monitored data varies. For example, officials from 4 of the 15
states we contacted told us that at least 20 percent of the waters they
listed as impaired were based solely on evaluated data, while
officials in another 4 states explained that none of their impairment
listings were based solely on such data. States also vary in how they
define monitored data. According to our analysis of the 50 states'
methodologies, some states considered data as "monitored data" only if
the data were collected within the past 5 years (as recommended by
EPA), while other states used a 7- to 10-year threshold.
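The monitored-versus-evaluated distinction described above is essentially an age and provenance check on each data record. A minimal sketch in Python, assuming hypothetical parameter names (the 5-year cutoff follows EPA's recommendation; a state using a 7- to 10-year threshold would adjust it):

```python
def classify_data(collection_year, current_year, cutoff_years=5,
                  direct_measurement=True):
    """Classify a water quality data record as 'monitored' or 'evaluated'.

    Monitored data are direct measurements collected within the cutoff
    window (5 years per EPA; some states use 7 to 10). Land-use data,
    predictive models, and other indirect indicators are 'evaluated'
    regardless of age, as are measurements older than the cutoff.
    """
    if not direct_measurement:
        return "evaluated"
    if current_year - collection_year > cutoff_years:
        return "evaluated"
    return "monitored"

print(classify_data(1995, 1998))                   # 3-year-old sample: monitored
print(classify_data(1990, 1998))                   # 8-year-old sample: evaluated
print(classify_data(1990, 1998, cutoff_years=10))  # monitored under a 10-year rule
```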
Use of Fish Advisories to Make Impairment Decisions:
States varied considerably in their reliance on fish consumption
advisories as a basis for listing impaired waters. In 1998, 47 states
issued a fish consumption advisory of some kind, according to EPA's
National Listing of Fish and Wildlife Consumption Advisories database.
However, only 15 states had waters that were listed as impaired because
of a fish consumption advisory, based on their 1998 303(d) list. Most
of the other states either chose not to list their waters as impaired
or counted a fish advisory as a single impairment for the entire state
rather than counting each of the state's affected waters. For example,
Wisconsin issued 447 fish consumption advisories for individual waters
in 1998 and listed 307 waters as impaired for a fish consumption
advisory on its 1998 303(d) list. In contrast, Minnesota issued 825
fish consumption advisories for individual waters in 1998 but listed
no waters as impaired for a fish consumption advisory on its 1998
303(d) list. EPA issued guidance on October 24, 2000, to help
remedy this inconsistency between states by recommending that a state
list a body of water as impaired if a fish consumption advisory shows
that water quality standards are not being met.
Methods to Determine Compliance With Water Quality Standards:
States also vary widely in the methods they use to compare water
quality data with water quality standards to determine whether waters
are impaired. To determine whether water quality data demonstrate an
impairment, states need to compare the data to the appropriate
criteria. For monitored data, which may include multiple samples from
one body of water, states decide how many samples need to exceed the
criterion for a particular pollutant before that water is considered
impaired. States vary both in the percentage of samples exceeding water
quality standards that are needed to consider a body of water as
impaired, and in the number of samples that need to be taken to
consider the sampling data as representative of actual conditions. For
example, as recommended by EPA, most states list waters as impaired by
conventional pollutants if 10 percent of the samples taken exceed water
quality standards. However, some states, such as Kansas and Nevada,
list waters as impaired only if the water quality standard is exceeded
in more than 25 percent of collected samples. In addition, some states
require a minimum data set of 10 samples, while other states, such as
Nevada and Arizona, require only 4 samples. Time frames within which
the minimum number of samples must be collected also vary. Wyoming
requires 10 samples to be collected over a 3-year period, while
Nebraska requires 10 samples to be collected over a 5-year period.
States Remove Waters From Their Lists for Various Reasons:
The option for states to remove listed waters is important because, as
EPA and states acknowledge, in the past many waters were listed
inappropriately. The reasons vary. For example, officials in one state
said that they mistakenly assessed some waters against higher standards
than necessary, which resulted in a number of waters being placed
inappropriately on their 303(d) list. In some cases, waters were listed
initially on the basis of little or no data. For example, officials
from one state told us that about half the waters on its 303(d) list
were listed on the basis of evaluated data. Upon additional monitoring
of these waters, the state found that many meet standards and should
therefore be removed from the 303(d) list.
EPA regulations require states to demonstrate "good cause" before an
impaired water can be removed from a 303(d) list.[Footnote 7]
Specifically, once a water body is listed as impaired, it must remain
on the list until a TMDL is developed unless good cause is shown to
remove it. According to the regulations, good cause includes (1) new
data showing improvement in the water; (2) new information showing a
flaw in the original impairment decision; or (3) changes in
technological conditions, such as the use of new control equipment.
Nonetheless, based on our analysis of the 50 states' methodologies,
states vary in their methods and justifications for delisting waters.
These findings were corroborated by our interviews with officials in
the 15 states we contacted, which demonstrated a widely diverse
experience in the delisting process. For example, officials in 11 of
the 15 states represented in our interviews cited a variety of reasons
for delisting waters, including their belief that some waters were
incorrectly listed in the first place; that the quality of some waters
had improved; or that a TMDL was established for the water, eliminating
the need to keep it on the 303(d) list.
We found that EPA regions play a key role in advising states on
delisting matters. Some state officials told us that they had received
guidance from their EPA regional counterparts on how to remove waters
from their lists, while others reported receiving no such guidance.
Moreover, the states that did receive guidance from their regional EPA
office were provided with different "burdens of proof" before a body of
water could be delisted. For example, state officials in one region
said that their region's policy allowed them the flexibility to delist
a water using the same method that was used to list it in the first
place, without new data. State officials in another region,
however, said that regardless of how a body of water was originally
listed, they could remove it only if they had new data showing that the
body of water was now meeting water quality standards. Similarly, one
region will allow states to remove waters that are not meeting water
quality standards but that have an EPA-approved TMDL in place. Another
region, however, will not support a delisting based only on an approved
TMDL.
States List Interstate Waters Inconsistently:
Evidence of variability in water quality standards, monitoring
practices, assessment methods, and delisting methods is perhaps most
clearly illustrated when examining waters that cross state boundaries
or serve as a boundary between states. Interstate waters often lie in
areas with similar ecological conditions. Yet because of varying
approaches by states in identifying impairments, situations have arisen
frequently in which one state designates a body of water as impaired
while another state does not, or in which one state designates a body
of water as impaired for a certain pollutant while another state finds
it impaired for a different pollutant.
EPA and the states have identified numerous inconsistencies of this
kind. Examples include the following:
* According to the 1998 303(d) list, Rhode Island lists the Abbot Run
Brook, which flows from Massachusetts into Rhode Island, as impaired to
protect the brook's designated use as a drinking water source.
Massachusetts does not list the brook because the state has not
designated it for use as drinking water, a more stringent designated
use.
* The Rio Grande, which flows from New Mexico and then forms the border
between Mexico and Texas, is considered by Texas to be used for
swimming (a "primary" human contact); Texas therefore applies a
stringent standard for fecal coliform bacteria in the river and
currently lists the river as impaired for this pollutant on its 1998
303(d) list. New Mexico, however, designates the river for wading (a
"secondary" human contact), uses a less stringent standard for fecal
coliform bacteria, and therefore does not list the river.
* The Sabine River along the border between Texas and Louisiana, south
of the Toledo Bend Reservoir, is listed by Texas as impaired for
pathogens on its 1998 303(d) list but not by Louisiana. The
discrepancy is attributed to a difference between the two states'
fecal coliform bacteria criteria for the contact recreation designated
use (see figure 5).
Figure 5: The Sabine River Between Texas and Louisiana:
[Refer to PDF for image]
This figure illustrates the Sabine River between Texas and Louisiana.
Source: EPA.
[End of figure]
* The Menominee River, which forms the boundary between the northeast
corner of Wisconsin and the southern tip of the Upper Peninsula of
Michigan, is included in Michigan's 1998 303(d) list as impaired
because of dioxin, pathogens, mercury, and a fish consumption advisory
for polychlorinated biphenyls (PCB). The river is listed for a fish
consumption advisory for mercury and PCBs in Wisconsin but it is not
listed for dioxin or pathogens because of differences in the timing of
monitoring and the type of monitoring conducted by the two states.
* Sugar Creek, flowing from North Carolina into South Carolina, is
listed as impaired for zinc in South Carolina but is not listed for
zinc in North Carolina according to the 1998 303(d) list. Both states
have the same water quality standard for zinc, but the pollutant was
not identified in North Carolina because that state uses different
monitoring practices than South Carolina does.
* The Missouri River, along the border between Nebraska and Iowa, is
listed in the 1998 303(d) list as impaired for pathogens in Nebraska
but not in Iowa. Both states have the same primary contact recreation
standard, but Iowa made its determination based on data from one
monitoring station while Nebraska used data from multiple monitoring
stations. On the other hand, the river is listed as impaired for
sediment in Iowa but not in Nebraska. Neither state has a numeric
criterion for sediment, so the two states' differing interpretations
have led to different listings (see figure 6).
Figure 6: The Missouri River Between Nebraska and Iowa and Several
Small Streams on the Border of Nebraska and Kansas:
[Refer to PDF for image]
The figure illustrates the Missouri River between Nebraska and Iowa and
several small streams on the border of Nebraska and Kansas.
Source: EPA.
[End of figure]
For several small streams on the border of Kansas and Nebraska, Kansas
has done more monitoring than Nebraska, which is in the process of
developing its monitoring network. As a result, Kansas has identified
waters with impairments, while Nebraska has not (see figure 6).
Officials in 12 of the 15 states that we contacted told us they believe
it is "somewhat" or "very" important that states collaborate when
making listing decisions regarding cross-jurisdictional waters. At the
same time, officials from 10 of the states also told us that they have
not collaborated with neighboring states to make listing decisions, and
officials from 5 of the states reported that they do not plan to
collaborate with neighboring states in the future. According to a
recent report by EPA's Office of Inspector General, lack of
collaboration between neighboring states was a primary contributor to
inconsistent interstate listings.
Importantly, officials in 13 of the 15 states that we contacted
reported that they have not received any guidance or assistance from
EPA aimed at increasing consistency in the way states list interstate
waters. Most of the states told us that they believe EPA should play a
facilitator/mediator role and help states work together to make listing
decisions on interstate waters. In connection with this, EPA officials
noted that river basin commissions may serve as a forum for resolving
inconsistent interstate listings. For example, the Delaware River Basin
Commission, the Ohio River Valley Water Sanitation Commission, and the
Susquehanna River Basin Commission have brought states together to
discuss different approaches and data.
EPA Has Recently Begun Efforts to Improve Consistency Among States:
EPA and many states have acknowledged variations in states' listing
approaches and the consequent inconsistencies, while at the same time
recognizing that some level of state flexibility is appropriate in
developing standards, monitoring water quality, and performing
assessments. To improve consistency, EPA published one set of guidance
in November 2001 and plans to issue a second set in early 2002. The
first set is the 2002 Integrated Water Quality Monitoring and
Assessment Report (Integrated Listing) guidance and the second set is
the Consolidated Assessment and Listing Methodologies (CALM) guidance.
Integrated Water Quality Monitoring and Assessment Report Guidance:
EPA's Integrated Listing guidance will merge existing guidance for
monitoring and assessing waters under section 305(b) of the Clean Water
Act and identifying impaired waters under section 303(d) and, according
to EPA, will result in a more comprehensive and consistent description
of states‘ waters, including impaired waters. States are currently
required to provide two separate lists of their impaired waters: one for
EPA's National Water Quality Inventory under section 305(b) and the
other under section 303(d). The lists in each case have been created
for different purposes. In the case of the inventory, the impaired
waters are listed as part of a general effort to characterize the
condition of each state's and the nation's waters. The impaired water
lists required under section 303(d) are prepared for the more
significant purpose of determining which waters need TMDLs and
potential remediation. In addition to the administrative burden of
submitting two separate lists, the divergent purposes of these lists
have led to inconsistencies between the two.
To address these inconsistencies, the Integrated Listing guidance will
create five categories in which states will rank their waters: (1)
waters that are attaining standards; (2) waters that are meeting some
standards and are not threatened for others, but for which there are
insufficient data to determine whether the remaining standards are
met; (3) waters with insufficient data to make any listing decision;
(4) waters that are impaired or threatened for one or more standards
but for which a TMDL does not need to be developed;[Footnote 8] and
(5) waters that are impaired and need a TMDL. The guidance also
recommends that the states use the National
Hydrography Dataset to geographically define and reference their
waters. The dataset provides comprehensive coverage of all waters and
allows for a common framework for all states to use in addressing
individual segments of waters across the United States.
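The five categories can be read as a simple decision ladder. A sketch of one possible reading (the boolean inputs and the ordering of the checks are this author's illustrative interpretation, not EPA's wording):

```python
def integrated_category(impaired_or_threatened, tmdl_needed,
                        attaining_some, data_sufficient_for_rest):
    """Assign a water to one of the five Integrated Listing categories.

    impaired_or_threatened: any standard is impaired or threatened
    tmdl_needed: a TMDL must still be developed for that impairment
    attaining_some: at least some standards are known to be attained
    data_sufficient_for_rest: enough data to judge remaining standards
    """
    if impaired_or_threatened:
        return 5 if tmdl_needed else 4
    if attaining_some and data_sufficient_for_rest:
        return 1  # attaining standards
    if attaining_some:
        return 2  # meeting some standards; data lacking for the rest
    return 3      # insufficient data for any listing decision

print(integrated_category(True, True, False, False))   # impaired, needs a TMDL: 5
print(integrated_category(False, False, True, True))   # attaining standards: 1
```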
Consolidated Assessment and Listing Methodologies Guidance:
EPA's proposed CALM guidance relies on state methodologies as a vehicle
to increase the consistency among state approaches in developing their
lists. The guidance contains "best practices" from state methodologies,
such as appropriate ways to document statistical approaches used to
assess monitored data or to document data quality considerations. In
the short run, the CALM guidance is intended to improve states‘ listing
approaches by improving the documentation of their water quality
assessments and by making their listing decisions more transparent. In
the long run, the guidance is also expected to result in more
comprehensive and effective state water quality monitoring programs.
According to EPA officials, sharing best practices among states
increases the likelihood of states adopting similar approaches.
Our findings support EPA's assessment that state methodologies need to
be more thorough and that the states' decision-making processes should
be more transparent. States are required to submit their listing
methodologies with their lists, including a description of the
methodology used to develop the list and the reason for any decision
not to use existing and readily available data. However, we found that the 1998
methodologies that the states submitted were inconsistent in the amount
and type of information provided. The methodologies ranged from a few
pages that generally explained state decision-making processes to much
more comprehensive documents detailing state monitoring practices and
assessment methods. According to EPA, encouraging states to disclose
more about their methods could more fully explain, and thereby help
alleviate, inconsistencies in state listings.
States Use a Range of Quality Assurance Procedures:
States use a range of quality assurance procedures to ensure that the
data they use to assess their waters are valid. Most states have
quality assurance programs for their own monitoring efforts, which are
generally based on EPA guidance. In addition to the data that they
generate themselves to make listing decisions, states are required by
regulation to consider existing and readily available data from other
sources, such as universities, volunteer groups, and other state or
federal agencies. In doing so, states are relying increasingly on
quality assurance requirements to help ensure the accuracy and
reliability of such external data. For example, some states passed
credible data or data integrity laws that establish requirements for
the quality or quantity of all data used to make impairment decisions.
EPA officials told us that increasing quality assurance improves the
reliability of the data on impaired waters, but they cautioned that
avoiding some data because of quality concerns could increase the risk
of not being able to identify some impaired waters.
Quality Assurance Programs Designed to Support Impairment Decisions:
Quality assurance programs for environmental data are designed to
provide assurance that the data are of sufficient quality and quantity
to support impairment decisions. As recipients of EPA funding, states
are required to have both a quality management plan and quality
assurance project plans to help ensure the validity of impairment
decisions. A quality management plan is a management tool that
documents an organization's system for planning, implementing,
documenting, and assessing its environmental data collection and
assessment methods. Within the overall plan, an organization develops
project-specific quality assurance project plans that serve as a
"blueprint" for data collection, handling, analysis, and management on
that particular project. EPA has guidelines for states to follow in
designing both their quality management plans and their project plans.
A key element of quality assurance for environmental data, including
water quality data, is the use of standard operating procedures for
data collection and analysis. Standard operating procedures involve
specific activities to manage a data collection project, collect and
handle water samples, analyze the samples, and manage the resulting
database. These procedures demonstrate that the data created and used
by the states are scientifically valid, legally defensible, and
credible. For example, one procedure to assure the integrity of the
data is to have a ’chain of custody“ for water samples, if a chemical
analysis is to be undertaken. This chain of custody is evidence that
the water samples could not be tampered with or tainted. Another
example of a procedure to assure the quality of a water sample is the
calibration of testing instruments.
The use of standard sampling procedures, in particular, is important to
provide accurate data for impairment decisions. For example, because
its previous methods were determined to be inadequate, USGS developed
stringent procedures to sample for trace metals and EPA has recommended
that these procedures be used by states. However, according to USGS and
EPA officials, states have the flexibility to select their sampling and
data analysis procedures and not all states use the more stringent
methods. According to the officials, the stringent methods are more
intensive and expensive and could place a burden on state monitoring
programs. According to USGS officials, the purpose of its stringent
procedures is to discover the specific amounts of trace metals in a
water body to depict current conditions and allow for delineation of
trends in water quality. On the other hand, states may only need to
know if their standards or criteria are met, and those criteria levels
may be much higher than the actual concentrations measured by USGS
methods. The officials also said that states can use alternative
procedures if they collect quality control data for their water
samples. Such quality control data include a variety of "blank" tests,
which are samples that can be used to identify whether any contaminants
are coming from the sampling equipment, such as the containers,
filters, and fixatives used to collect samples.
According to an EPA monitoring official, the most important and
challenging quality assurance issue that states face is the sufficiency
of their monitoring networks and the amount of data available to make
impairment decisions. For each water body sampled, states need to have
a sufficient number of samples to support an impairment decision.
However, because of the large number of waters that states need to
monitor and the fact that the waters need to be sampled several times,
the states are often constrained in the number of samples they can take
for each one.
According to USGS officials, sampling is sometimes complicated by the
need to take samples at different times. Depending on the pollutant,
water samples need to be taken at various times of the day to reflect
different physical conditions in a water body. For example, dissolved
oxygen fluctuates naturally during a 24-hour cycle and as a result,
samples taken at different times of the day will likely provide
different levels of dissolved oxygen.
Water Quality Data Are Increasingly Subjected to Quality Assurance
Requirements:
States have had quality assurance programs in place for their own data
for several years. As recipients of federal funds for water quality
monitoring, states are required to have such programs for their own
data gathering efforts. Officials in 14 of the 15 states represented in
our interviews said that they have procedures that must be followed
during their own state monitoring efforts. Officials from the remaining
state said that much of its monitoring work is contracted or granted
out to groups that follow quality assurance procedures. State officials said that their
procedures were documented in manuals and guidance. EPA officials
stated that the states' efforts to increase data quality will result
in more credible listings, but that states should continue to consider
existing and readily available data and be wary of rejecting any data
that may indicate that an impairment exists.
Data Gathered From External Sources:
States are considerably more wary about the quality of the data that
they use from external sources. While states generally do not require
external groups to follow their own data collection procedures, they
have become increasingly concerned about the quality of data that
external groups submit and are therefore asking them to document their
quality assurance procedures. Officials from most of the 15 states
contacted told us that they attempt to assess the quality of the data
presented from external sources. Officials from eight states said they
require that data from other sources be accompanied by a quality
assurance plan and that they do not use data submitted without one.
Some other state officials that we
interviewed said that, while they do not require the submission of a
quality assurance plan or the use of specific collection procedures,
they do require that the samples be analyzed by a state-certified
laboratory. Officials from one state mentioned that they are
comfortable with data obtained from either federal or other state
agencies because they are familiar with the agencies' data collection
methods and accept the data accordingly.
As a result of their concern over the quality of data, many states
limit the data they use from outside sources. Officials from 7 of the
15 states told us that there are some sources of data that the state
will not use to make listing determinations, including voluntarily
collected data. The officials in the remaining states said that they do
not limit sources of data, but may eliminate data that are not of
sufficient quality for listing purposes.
Officials from 5 of the 15 states said that they use external data to a
"minor" or "very minor" extent. For example, South Carolina makes most
of its impairment decisions based on its own state data, in part
because it does not receive much external data. Only three states use
data from external sources to a "great" or "very great" extent. For
example, Georgia accepts most external sources of data, including data
from universities, state and federal agencies, and local governments.
Utah, through its cooperative monitoring program with local, state, and
federal entities, also attempts to use much of the monitoring data
provided by external sources.
Even when state officials decline to use data from external sources to
make listing decisions, they sometimes find the data useful as a "trigger"
for further monitoring work. Officials from 8 of the 15 states said
they use external sources to identify potentially impaired areas in
which to conduct future state monitoring and assessment efforts.
State Data Integrity Laws:
In light of states‘ increased concerns over the quality of data used to
make important impairment decisions, we identified seven states
nationwide that have passed data integrity laws that establish
requirements for the quality or quantity of data used to make these
decisions. Many states follow EPA guidance providing that a water may
be listed as impaired when 10 percent of the data exceed a criterion.
After passing such a law in 2000, Florida has since
written state regulations providing that the state should have at least
20 data points to make an impairment decision. In addition, the
regulations establish the number of exceedances that are needed to
declare a water impaired. For example, the regulations require that at
least 5 samples should exceed the water quality standard for a water
with 20 samples overall. Arizona's regulations require that state water
quality officials use only "reasonably current, credible, and
scientifically defensible data." Data are considered credible and
scientifically defensible if appropriate quality assurance and control
procedures were used and documented in collecting data. Virginia's law
requires state water quality officials to consider reasonable data to
be data no older than 5 years. Wyoming's law requires the state to
have three types of data (chemical, physical, and biological) in order
to list a body of water as impaired.
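Florida's 20-sample rule described above, for instance, can be sketched as a minimum-data plus minimum-exceedance test. This is illustrative only: the actual regulations specify the required exceedances for each sample size, which this simplified function does not attempt to reproduce.

```python
def florida_style_decision(samples, criterion,
                           min_points=20, min_exceedances=5):
    """Apply a Florida-style data integrity test: at least 20 data
    points are needed before an impairment decision can be made, and
    a water with 20 samples is listed only when at least 5 of them
    exceed the water quality standard."""
    if len(samples) < min_points:
        return None  # too few data points to decide
    return sum(1 for s in samples if s > criterion) >= min_exceedances

readings = [3, 4, 6, 2, 7, 3, 8, 4, 2, 6, 3, 5, 9, 2, 4, 3, 6, 2, 4, 3]
# Twenty readings, six above a criterion of 5.0, so the water is listed.
print(florida_style_decision(readings, criterion=5.0))
```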
Balancing Data Availability and Quality Control:
EPA officials told us that, overall, the data quality improvements
states are seeking are appropriate. They cautioned, however, that the
need for quality must be balanced with the requirement under
regulations to use all readily available data as part of the assessment
of water quality. Under EPA‘s regulations for listing impaired waters,
states are to consider all readily available data as they assess the
quality of their waters. However, increasing standards of data quality
may result in the rejection of some data, with the risk that some
impaired waters might not be identified. State and EPA officials
suggested that the preferred way to handle data that do not meet
quality assurance standards is to use the data as a trigger for follow-
up monitoring, as some states appear to be doing based on our
interviews. Furthermore, EPA and some state officials indicated that
data from external sources can extend the state‘s monitoring resources.
Accordingly, they have sought to establish guidance and training for
volunteer monitoring programs. For example, Massachusetts has developed
guidance for volunteer monitors and uses quality assured data gathered
by these groups along with its own data to make decisions about whether
or not waters are impaired and should be on the 303(d) list. Where data
quality is questionable, Massachusetts identifies the segment in its
water quality assessment reports for additional follow-up monitoring to
confirm and document the impairment.
The National Research Council report supports the idea of using lower-
quality data to identify states‘ monitoring needs. The report addressed
the issue of data quality by suggesting that a ’preliminary list“ of
waters be developed to report waters suspected of being impaired and
needing further monitoring. The Council states that in situations where
minimal data or evaluated data are available, the data may not be
sufficient for listing a body of water as impaired but may be valuable
for identifying potentially impaired waters. As noted previously, EPA‘s
Integrated Listing guidance incorporates the concept of different lists
and also recommends that states develop a monitoring strategy to deal
with waters for which sufficient data do not exist. Officials from two-
thirds of the 15 states that we interviewed agreed that such a list
would be useful as a way to deal with uncertain data. Officials from
the remaining states cautioned that the list may not be a good idea.
One state said that it could be perceived as a requirement to monitor
the waters, which could create a burden on state monitoring programs
and resources.
Reliability of EPA's Impaired Waters Database Limited by Inconsistent
Data:
Owing in part to the inconsistencies in states' approaches to
identifying impaired waters, the information in EPA's database of
impaired waters is of questionable reliability. EPA has incorporated
the states' data on impaired waters into a large database and has
recently made this information available to policymakers and the public
over the Internet. In addition to the inconsistencies in the ways that
states identify their waters as impaired, there are inconsistencies in
how states report critical information to EPA for inclusion in the
database. In some cases, EPA's database and the information portrayed
on its Web site contain inaccuracies. One-third of the state officials
we interviewed said that EPA's Web site did not portray their state's
data accurately.
EPA has undertaken efforts to improve the public's access to
information on impaired waters nationwide by upgrading its Internet
capabilities. Specifically, EPA has used the data on impaired waters
submitted by the states to create a large database of information,
called the TMDL Tracking System, which is one of the databases used by
the Watershed Assessment, Tracking, and Environmental Results (WATERS)
system. Both the TMDL database and WATERS are used to convey
information on EPA's Web site. The TMDL database includes data related
to states' listings, the causes of impairment, court decisions related
to the lists, TMDL schedules, and other information necessary to
understand the status of states' listings and TMDL programs. The
database can be used to generate summary reports on the impaired waters
of a state. The TMDL database is linked to WATERS, which enables the
data to be displayed on maps. WATERS unites water quality information
previously available only on individual state agency homepages and in
several EPA databases that support EPA's Web site. In the future, EPA
plans to include additional information, such as no-discharge zones and
monitoring stations.
With any such system, the information presented can be only as good as
the data entered into the supporting database. Accordingly,
inconsistencies in the data submitted by states, as well as inaccurate
data in some cases, raise questions about the reliability of the TMDL
database and of WATERS.[Footnote 9] Of greatest consequence, the
variation in states' standards, monitoring, assessment, and listing
practices, as discussed previously, results in inconsistencies in EPA's
database. For example, the wide variation in states' monitoring
programs means that states have widely different bases upon which to
make impairment decisions, resulting in varying numbers of impaired
waters among states. Such inconsistencies help to explain why the
numbers of waters identified as impaired range from as few as 37 in one
state to more than 1,000 in several others. These inconsistencies also
make it difficult to aggregate data from individual states into a
national picture or to compare the quality of waters from one state to
the next.
Variations in how states report critical data to EPA for incorporation
into the TMDL database also undermine its reliability. Because states
identify the size of impaired waters differently, EPA's tally of both
the total number of impaired waters nationwide and the number of TMDLs
that must be established is not reliable. More specifically, some
states submit lists that count several small segments of a river or
stream as individually impaired waters, while others submit lists that
count larger segments of a river or stream as impaired. Illinois, for
example, breaks the Mississippi River into many segments, while
Missouri breaks it into three. As another example, Indiana's impaired
water segments for one river were reported separately by EPA for each
impairment, while Illinois' impaired water segments for the same river
were listed once, with all impairments noted under the single listing.
As a result, according to an Indiana water official, the state may
appear to have more impaired water segments than it actually does. This
variation may be alleviated by EPA's Integrated Listing guidance. As
recommended by the National Research Council, the guidance encourages
states to use one georeferencing system, called the National
Hydrography Dataset, to define the waters within their borders.
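To make the counting problem concrete, the following sketch uses made-up segment names and impairment causes, not actual state data, to show how two reporting conventions for the same stretch of river produce different record counts:

```python
# Hypothetical illustration only: segment names and impairment causes
# are invented, not drawn from any state's actual 303(d) list.

# Convention 1: each impairment of each small segment is its own record.
per_impairment_records = [
    ("Segment 1", "bacteria"),
    ("Segment 1", "nutrients"),
    ("Segment 2", "bacteria"),
    ("Segment 3", "low dissolved oxygen"),
]

# Convention 2: one record per larger segment, all impairments attached.
per_segment_records = [
    ("Segments 1-3", ["bacteria", "nutrients", "low dissolved oxygen"]),
]

# A naive national tally counts records rather than water, so the same
# river appears as 4 impaired waters under one convention and 1 under
# the other.
print(len(per_impairment_records))  # 4
print(len(per_segment_records))     # 1
```

Only a common way of dividing and georeferencing the waters, applied before counting, would make such tallies comparable from one state to the next.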
Because states currently use a number of different ways to define their
waters, errors may result when EPA transfers their data into the WATERS
system and presents the information on its Web site. Overall, fewer
than one-third of the state water quality officials that we interviewed
told us that their state's water quality is reflected "somewhat" or
"very" accurately on the EPA Web site. A Connecticut water quality
official explained that the state's water quality is inaccurately
reflected on EPA's Web site as a result of a scaling problem. The
official explained that while there are waters in Connecticut that are
impaired only in very localized areas, the EPA Web site depicts that
impairment over a much larger area, thereby overstating the problem
area and giving the public the sense that the problem is bigger than it
truly is. Similarly, Massachusetts uses smaller-scale watersheds to
identify impaired waters, while EPA uses larger-scale watershed data.
As a result, the waters in Massachusetts are listed at the aggregate
level, misstating the geographical extent of the problem. This often
conveys a larger problem than the one the state reported and can mask
multiple problems within a smaller geographical area. EPA officials
said that the agency attempts to present states' data as submitted to
avoid misrepresenting the information, and that the agency provides
states with the opportunity to review and revise the database
information. They further noted that this issue may be resolved by the
states using the National Hydrography Dataset.
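The scaling problem the officials describe can be sketched in the same spirit; all mileage figures below are invented for illustration:

```python
# Hypothetical illustration only: every figure here is invented.

# Impaired stream miles a state documented, per small sub-watershed.
impaired_miles = {"Sub-basin A": 2.0, "Sub-basin B": 0.5}

# Total stream miles in the larger watershed unit used for display.
large_watershed_miles = 120.0

state_documented = sum(impaired_miles.values())

# If the map shades the entire larger watershed as impaired, the extent
# shown to the public is the whole unit, not the documented miles.
displayed_extent = large_watershed_miles

print(state_documented)   # 2.5
print(displayed_extent)   # 120.0
```

Mapping small, localized impairments onto coarse display units in this way both overstates the extent of a problem and hides the fact that there may be several distinct problems inside one shaded unit.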
Conclusions:
States need some degree of flexibility in the way they list their
impaired waters to account for their particular ecological conditions
and other unique characteristics. Indeed, some flexibility in key
listing-related functions, such as the adoption of water quality
standards and water quality monitoring, is provided under both the
Clean Water Act and EPA regulations. However, flexibility currently
exists beyond what is needed to address local ecological
characteristics or other differences. States have developed varied
approaches to setting water quality standards, monitoring water
quality, and assessing water quality data to make listing
determinations. States have also developed inconsistent methods and
justifications for removing waters from their lists, based, in part, on
inconsistent interpretations of EPA guidance by EPA regions. Moreover,
current EPA policy has allowed wide disparities in how states describe
their methodologies for identifying and listing impaired waters.
The inconsistency in state approaches is most apparent in bodies of
water that are shared by neighboring states but are often listed
differently by them. Such inconsistencies can engender doubt about the
accuracy of the listings and states‘ abilities to correctly identify
impaired waters. If states cannot correctly identify impaired waters,
they cannot efficiently channel efforts or resources to develop TMDLs
for improving water quality. While the problem of inconsistent
interstate listings has been clearly demonstrated, few states have
received any guidance or assistance from EPA on how to address it. Many
have indicated that EPA can usefully serve as a mediator and/or
facilitator in helping states to work together in making listing
decisions on such waters.
In its regulatory role, EPA needs to be able to ascertain the nature
and extent of impairments on a national level and to provide a coherent
picture of water quality to policymakers and the public. Inconsistent
state approaches have undermined EPA‘s ability to provide such a
picture. We acknowledge the inherently difficult problems EPA faces in
presenting an accurate picture of states‘ impairment data, and its
efforts to address them. While EPA has undertaken significant efforts
to convey information about impaired waters over the Internet, this
information is potentially misleading in its current state and will be
of limited value until EPA improves the reliability of the data.
Recommendations for Executive Action:
To provide greater consistency in the way states list their impaired
waters, we recommend that the Administrator, EPA:
* provide additional guidance to the states on carrying out the key
functions (including standard-setting, water quality monitoring, and
data assessment) that influence how states identify the waters for
their section 303(d) lists;
* work with the agency's regional offices to ensure a more consistent
interpretation of the agency's policies on the criteria that states
must meet to remove waters from their section 303(d) lists;
* provide clear guidance to the states on the information they should
use to describe their methodologies for developing their section 303(d)
lists; and
* work with the states to help resolve discrepancies that arise in the
listing of interstate waters. In pursuing such a role, the agency could
benefit from the activities of the nation‘s river basin commissions,
which are already attempting to assist their states in making
interstate listing decisions.
In addition, until EPA's Office of Water resolves problems relating to
inaccurate and/or misleading data contained in its WATERS database, we
recommend that the Administrator direct that office to explain clearly
and visibly to users of its impaired waters Web site the potential
misinterpretations that may arise from its current presentation of these
data.
Agency Comments and Our Evaluation:
We provided EPA and the Department of the Interior with a draft of this
report for review and comment. EPA did not submit a formal letter but
did provide comments from officials in the agency's Office of Water.
Overall, the officials said that our treatment of the issues raised in
the report accurately reflects discussions we have had with Office of
Water officials and that our recommendations are reasonable. The
officials also described initiatives under way that are germane to our
recommendations concerning the need to (1) increase consistency in how
states list their waters and (2) convey to users of EPA's impaired
waters Web site the potential misinterpretations that may arise from
the site's current presentation of listing data.
Regarding consistency of listings, EPA noted that it recently
distributed to the states and regions its 2002 Integrated Water Quality
Monitoring and Assessment Report guidance. EPA expects this guidance to
reduce the inconsistencies in state practices for monitoring their
waters, characterizing their water quality standards attainment status,
and identifying those waters requiring the establishment of TMDLs. EPA
also pointed out that the states' development of integrated reports
will provide a much clearer summary of the quality of the nation's
waters. While we agree that the integrated report will provide a useful
summary of states' water quality and will likely reduce inconsistencies
in how they report on the quality of their waters, we do not believe
that the integrated reporting guidance will help significantly in
reducing inconsistencies in states' approaches for identifying impaired
waters. In particular, the guidance does not address the key functions
that most influence how states interpret their water quality standards,
monitor their waters, and assess the water quality data used to
identify impaired waters.
On the other hand, EPA‘s draft Consolidated Assessment and Listing
Methodologies guidance (CALM) has the potential to more directly
address sources of inconsistency. Specifically, the guidance seeks to
encourage states to improve their assessment and listing methodologies
and, in the longer term, strengthen their monitoring programs. The
guidance also has the potential to address inconsistencies in states'
water quality monitoring and assessment practices, and in how they
describe their approaches through the methodologies they submit to EPA
along with their 303(d) lists. However, as of December 2001, the CALM
guidance had not yet been published.
EPA did not comment directly on our recommendation that it should work
with its regional offices to ensure a more consistent interpretation of
the agency's policies on removing waters from their 303(d) lists. We
note, however, that the need for consistent regional interpretation of
the agency's delisting guidance will grow significantly in the future
under the agency's new Integrated Listing guidance. Specifically, only
the fifth of the five categories of waters in EPA's new categorization
process is considered to be the 303(d) list. EPA expects that states
will transfer waters from this category to other categories, with
significant implications for which state waters will be targeted for
TMDL development. As such, it will be essential that EPA's guidance on
these decisions be interpreted consistently from one region to another.
EPA also did not comment directly on our recommendation that it should
work with states to help resolve discrepancies that arise in the
listing of interstate waters.
Regarding our recommendation concerning the potential misinterpretation
by users of listing information on EPA‘s impaired waters Web site, EPA
noted that it will continue to assist states in georeferencing their
waters to document impairments in a consistent manner and that it will
continue to update the WATERS database. In addition, EPA's Integrated
Listing guidance recommends that states use one standard format for
physically defining all of their waters. These efforts should help to
increase the consistency of reporting the size and number of impaired
waters in future lists. However, until the inconsistencies in states'
approaches are resolved, the reporting of impaired waters will continue
to be highly variable. For this reason, we continue to recommend that
EPA explain to users the potential misinterpretations that may arise
from the current presentation of the data.
In its letter dated December 13, 2001, the Department of the Interior
said that our draft report "covered a complicated and detailed topic
well" and that "many of the contributing factors to inconsistent state
perspectives on water quality conditions are carefully identified...."
The letter included a number of technical comments and suggestions from
the department's U.S. Geological Survey, which have been incorporated
as appropriate (see appendix II).
Scope and Methodology:
To identify and assess the effects of any differences in states'
approaches to identifying impaired waters, we conducted a telephone
survey of the state officials responsible for developing such lists of
impaired waters for 15 randomly selected states. We also reviewed and
analyzed the written methodologies that each of the 50 states and the
District of Columbia submitted to EPA. The methodologies are prepared
by the states to explain the methods they use to decide whether waters
are impaired. In addition, we identified several instances of waters
that share state boundaries and appeared to be inconsistently listed by
the states. We discussed these examples with EPA headquarters and
regional officials to determine the reasons for the apparently
inconsistent listings.
To determine how states ensure the quality of the data used to identify
impaired waters, we first reviewed EPA's quality assurance guidance to
determine what is required of states. We included questions on the
quality assurance procedures that states use in our 15-state survey of
state water quality officials. We also interviewed appropriate
officials at 9 of 10 EPA regional offices to determine what procedures
states in each region are following to ensure the quality of the data
used to create their lists. Finally, we reviewed data credibility
regulations written by two states and discussed them with state and
regional officials.
To assess the reliability of the information in EPA's database of
impaired waters, we took steps to determine the consistency,
completeness, and accuracy of this information. We reviewed EPA‘s
guidance for preparing the 303(d) report and other EPA guidance
relevant to the monitoring and assessment of waters. We requested EPA
to provide us specific data by state and examined the data for
completeness. To determine the accuracy of EPA‘s WATERS Web site and
other EPA sites based on the database, we requested the officials who
participated in our 15-state survey to look at their state information
and provide us with an assessment of how accurately the data were
portrayed. We also used the Web site to attempt to gather information
that would allow us to determine the nature and magnitude of the
nation's water quality problems; however, we were unable to do so. We
discussed these matters with EPA headquarters officials.
We conducted our work from April through November 2001 in accordance
with generally accepted government auditing standards.
As we agreed with your offices, unless you publicly announce the
contents of this report earlier, we plan no further distribution of it
until 30 days from the date of this letter. We will then send copies to
appropriate congressional committees and other interested parties and
make copies available to those who request them.
If you or your staff have any questions about this report, please call
me or Steve Elstein at (202) 512-3841. Key contributors to this report
are listed in appendix III.
Signed by:
John B. Stephenson:
Director, Natural Resources and Environment:
[End of section]
Appendix I: Status of States' Monitoring and Assessment of Their
Waters:
States use a variety of monitoring practices and assessment methods; as
a result, the percentage of waters monitored and assessed across states
varies greatly. States report the percentage that they have monitored
and assessed for (1) rivers and streams; (2) lakes, reservoirs, and
ponds; (3) estuaries; and (4) ocean shorelines. Because rivers,
streams, estuaries, and ocean shorelines are reported in miles, while
lakes, reservoirs, and ponds are reported in acres, the percentage for
each category is reported separately below, in table 1.
Table 1: Percentage of States' Waters Monitored and Assessed:

State; Rivers and streams; Lakes, reservoirs, and ponds; Estuaries; Ocean shorelines
Alabama; 5%; 94%; 100%; 15%
Alaska; 0%; 0%; 1%; 0%
Arizona; 5%; 22%; [A]; [A]
Arkansas; 10%; 69%; [A]; [A]
California; 8%; 44%; 89%; 57%
Colorado; 27%; 36%; [A]; [A]
Connecticut; 16%; 42%; 100%; 0%
Delaware; 95%; 94%; 4%; 100%
District of Columbia; 98%; 100%; 97%; [B]
Florida; 10%; 48%; 33%; 0%
Georgia; 12%; 94%; 100%; 0%
Hawaii; 100%; 0%; 100%; 84%
Idaho; 11%; 0%; [A]; [A]
Illinois; 33%; 61%; [A]; [A]
Indiana; 24%; 32%; [A]; [A]
Iowa; 14%; 52%; [A]; [A]
Kansas; 12%; 100%; [A]; [A]
Kentucky; 19%; 96%; [A]; [A]
Louisiana; 9%; 35%; 40%; 0%
Maine; 100%; 100%; 100%; 0%
Maryland; 39%; 27%; 98%; 100%
Massachusetts; 18%; 56%; 8%; 0%
Michigan; 40%; 55%; [A]; [A]
Minnesota; 13%; 77%; [A]; [A]
Mississippi; 47%; 58%; 28%; 55%
Missouri; 42%; 100%; [A]; [A]
Montana; 10%; 94%; [A]; [A]
Nebraska; 5%; 45%; [A]; [A]
Nevada; 1%; 60%; [A]; [A]
New Hampshire; 24%; 95%; 100%; 100%
New Jersey; 59%; 44%; 100%; 100%
New Mexico; 4%; 15%; [A]; [A]
New York; 100%; 100%; 100%; 100%
North Carolina; 89%; 100%; 100%; 0%
North Dakota; 22%; 97%; [A]; [A]
Ohio; 10%; 0%; [A]; [A]
Oklahoma; 14%; 57%; [A]; [A]
Pennsylvania; 15%; 0%; [A]; [A]
Rhode Island; 54%; 75%; 100%; 100%
South Carolina; 65%; 58%; 32%; 0%
South Dakota; 32%; 18%; [A]; [A]
Tennessee; 88%; 100%; [A]; [A]
Texas; 7%; 50%; 100%; [B]
Utah; 10%; 96%; [A]; [A]
Vermont; 16%; 7%; [A]; [A]
Virginia; 39%; 93%; 99%; 0%
Washington; 98%; 53%; 85%; 0%
West Virginia; 24%; 96%; [A]; [A]
Wisconsin; 40%; 65%; [A]; [A]
Wyoming; 87%; 0%; [A]; [A]
[A] State does not have estuaries or ocean shorelines.
[B] This information was not available.
Source: EPA‘s 305(b) report for 1998.
[End of table]
[End of section]
Appendix II: Comments From the Department of the Interior:
United States Department of the Interior:
Office Of The Secretary:
Washington, D.C. 20240:
December 13, 2001:
Mr. John B. Stephenson:
Director, Natural Resources and Environment:
United States General Accounting Office:
Washington, D.C. 20548:
Dear Mr. Stephenson:
Secretary Norton provided me a copy of your draft report entitled,
"Inconsistent State Approaches Complicate Nation's Efforts to Identify
Its Most Polluted Waters" (GAO-02-186) to review. As you know, staff of
the U.S. Geological Survey (USGS) were contacted by your office to
provide information.
While there are no recommendations for the USGS, the USGS staff has
provided the following comments:
Overall, we have found that you have covered a complicated and detailed
topic well. Many of the contributing factors to inconsistent State
perspectives on water-quality conditions are carefully identified, and
examples are given to clarify what you have understood. We think this
document will serve many of your readers well.
We have some specific comments that we hope will help you further
clarify the points you are making.
Page 2:
"Variation among the States stems from a combination of factors
including differences in the (1) water quality standards for
determining which waters...." All four of the points made are necessary
to understand the inconsistency between States on water-quality
conditions. One additional issue could be mentioned and that is the
determination States make of what the specific beneficial uses are.
Even if States use consistent methods for standards, monitoring,
assessment, and get consistent guidance on removal of listed reaches,
if they do not agree on the beneficial uses for water bodies, their
conclusions on impairment will be different. You point out several
examples (pages 17-19) where one State sees the water in the same river
as a drinking water source another State does not, leading to different
standards and perspectives.
Page 4, Footnote #2:
The note indicates that impaired waters that have or are expected to
have technological controls in place to meet standards do not need a
Total Maximum Daily Load (TMDL). If there is a time period over which
the impaired waters are expected to meet standards, thus avoiding the
need for a TMDL, please state what time frame is specified.
Page 5:
First full paragraph discusses when 303(d) lists were due (October 1,
2002), but, in August 2001, the effective date was postponed, allowing
18 months for public review of revised regulations. Is there a
currently identified due date for the 303(d) lists? Will the new
November 2001 guidance state what the due date is to be?
Page 9:
In the last paragraph: "USGS officials recommend that States utilize
all three types of monitoring to help ensure that water quality
conditions are adequately characterized." Consider adding: USGS
officials suggested that, although the endpoint of monitoring for
States may be the biological condition, a State that wants a different
biological outcome than the one it finds can obtain a new biological
condition by adjusting the physical or chemical conditions.
Page 10:
In the figure, consider adding a few additional measures that are key
to monitoring. For example: Under biological, indicator bacteria; under
physical, flow; and under chemical, dissolved oxygen. These additions
point to two of the frequent reasons for 303(d) listing of streams
(bacteria, oxygen) along with suspended solids and nutrients already
listed. Flow is important because flow conditions can identify times
when contaminants have been either diluted or concentrated at a stream
site. Also, flow information is ultimately required to establish TMDLs.
Page 11, Footnote #7:
Consider that probability-based monitoring is important to efficiently
identify the overall condition of waters in a State as stated; however,
while the results will provide the percentage of all waters in the
State that exceed a criterion, probability monitoring will not identify
the specific reaches that exceed that criterion.
Thus, both probability and targeted monitoring are needed for 305(b)
and 303(d) requirements.
Page 23:
First paragraph: "According to USGS officials, the purpose of its
stringent procedures is...." Consider that the statement could be
rewritten: According to USGS officials, the purpose of its stringent
procedures is to determine the specific amounts of trace metals in a
water body, depicting current conditions and allowing for
quantification of water-quality time trends. On the other hand, States
may only need to know whether their standards or criteria are met, and
those criteria
levels may be much higher than the actual ambient concentrations
measured by the USGS methods.
Page 27:
In the paragraph discussing the accuracy of data in the TMDL database
and WATERS, consider that, in addition to the reasons stated in this
paragraph for inconsistency among States' listings, you also
demonstrate on pages 17-19 that an additional reason for variation
among States is that they designate different beneficial uses for
waters. The different uses will lead to variations among States in
listing waters even if they use consistent methods for monitoring,
criteria setting, and analysis for impairment decisions.
Thank you for the opportunity to review and to comment on the draft
report before it is finalized.
Sincerely,
Signed by:
R. Thomas Weiner, for:
Bennett W. Raley:
Assistant Secretary for Water and Science:
[End of section]
Appendix III: GAO Contacts and Staff Acknowledgments:
GAO Contacts:
John B. Stephenson (202) 512-3841:
Steve Elstein (202) 512-6515:
Staff Acknowledgments:
In addition to those named above, Aaron Alton, Susan E. Iott, Nathan A.
Morris, and Barbara L. Patterson made key contributions to this report.
Also contributing to this report were Nancy Crothers, Barbara Johnson,
Karen Keegan, Trish McClure, and Cynthia Norris.
[End of section]
Footnotes:
[1] Water Quality: Key EPA and State Decisions Limited by Inconsistent
and Incomplete Data [hyperlink,
http://www.gao.gov/products/GAO/RCED-00-54], Mar. 15, 2000.
[2] EPA revised these regulations in 1992 to make the list a biennial
requirement.
[3] National Research Council, Assessing the TMDL Approach to Water
Quality Management (Washington, D.C.: National Academy Press, 2001).
[4] EPA issued guidance for numeric nutrient criteria in October 2001.
Wisconsin officials told us that the number of waters on their 303(d)
list would increase by approximately 10 percent if they switched to
this guidance from the narrative criteria they currently use.
[5] Chris Yoder and Edward T. Rankin, "The Role of Biological
Indicators in a State Water Quality Management Process," Environmental
Monitoring and Assessment, vol. 51 (1998), pp. 61-88.
[6] Because monitoring all waters in a state is prohibitively
expensive, states generally choose sites to monitor either on a
targeted basis or on a random basis, called probability-based
monitoring. Currently, many states use a targeted approach to monitor
their waters, which means that monitoring points are selected
judgmentally or for a purpose. The points can be placed either in a
fixed fashion or can be done by rotating basin, which involves the
state monitoring and assessing a portion of its watersheds each year in
a rotating fashion. With targeted sampling, unless complete coverage
can be achieved, the data cannot be used to draw conclusions about the
extent to which the state's entire inventory of waters is attaining
water quality standards. Probability-based monitoring involves placing
monitoring points in a statistically random pattern, which allows the
state to reach conclusions about the status of all its waters. EPA
guidance encourages states to incorporate probability-based monitoring
into their monitoring practices. Thirty states are experimenting with
probability-based assessments, with six states already using them.
However, while the results will provide a percentage of all waters in
the state that exceed criteria, probability monitoring does not
identify the location of specific segments of water that exceed
criteria. Thus, both probability and targeted monitoring are needed for
305(b) and 303(d) reporting.
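The distinction this footnote draws can be sketched numerically (a hypothetical illustration, not part of the report; the segment counts and impairment rate are invented). A simple random sample supports a statewide estimate of the fraction of waters exceeding a criterion, but it says nothing about the status of any unsampled segment:

```python
# Illustrative sketch: probability-based monitoring estimates a statewide
# percentage of impaired waters but cannot locate specific impaired reaches.
import math
import random

random.seed(1)

# Hypothetical inventory: 10,000 stream segments with an unknown (to the
# state) impairment status; roughly 30% are truly impaired.
segments = {i: random.random() < 0.3 for i in range(10_000)}

# Monitor a simple random sample of 200 segments.
n = 200
sample_ids = random.sample(list(segments), n)
exceedances = sum(segments[i] for i in sample_ids)

p_hat = exceedances / n                       # estimated statewide fraction
se = math.sqrt(p_hat * (1 - p_hat) / n)       # standard error of the estimate
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)   # approximate 95% interval

# p_hat estimates the statewide percentage of impaired waters, but the
# 9,800 unsampled segments remain uncharacterized, so targeted monitoring
# is still needed to list specific reaches under section 303(d).
```

The estimate converges on the statewide rate as the sample grows, yet the set of specifically identified segments never extends beyond the sample, which is the footnote's reason that both monitoring designs are needed.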
[7] 40 CFR 130.7 (b)(6)(iv).
[8] Waters that are impaired but do not need a TMDL may include those
for which TMDLs have been completed and those for which the states plan
additional actions that will improve the waters.
[9] Data are deemed to be "reliable" if they are sufficiently complete
and error free to be convincing for their purpose and context.
[End of section]
GAO's Mission:
The General Accounting Office, the investigative arm of Congress,
exists to support Congress in meeting its constitutional
responsibilities and to help improve the performance and accountability
of the federal government for the American people. GAO examines the use
of public funds; evaluates federal programs and policies; and provides
analyses, recommendations, and other assistance to help Congress make
informed oversight, policy, and funding decisions. GAO's commitment to
good government is reflected in its core values of accountability,
integrity, and reliability.
Obtaining Copies of GAO Reports and Testimony:
The fastest and easiest way to obtain copies of GAO documents at no
cost is through the Internet. GAO's Web site [hyperlink,
http://www.gao.gov] contains abstracts and full text files of current
reports and testimony and an expanding archive of older products. The
Web site features a search engine to help you locate documents using
key words and phrases. You can print these documents in their entirety,
including charts and other graphics.
Each day, GAO issues a list of newly released reports, testimony, and
correspondence. GAO posts this list, known as "Today's Reports," on its
Web site daily. The list contains links to the full-text document
files. To have GAO e-mail this list to you every afternoon, go to
[hyperlink, http://www.gao.gov] and select "Subscribe to daily E-mail
alert for newly released products" under the GAO Reports heading.
Order by Mail or Phone:
The first copy of each printed report is free. Additional copies are $2
each. A check or money order should be made out to the Superintendent
of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or
more copies mailed to a single address are discounted 25 percent.
Orders should be sent to:
U.S. General Accounting Office:
441 G Street NW, Room LM:
Washington, D.C. 20548:
To order by Phone:
Voice: (202) 512-6000:
TDD: (202) 512-2537:
Fax: (202) 512-6061:
To Report Fraud, Waste, and Abuse in Federal Programs Contact:
Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]:
E-mail: fraudnet@gao.gov:
Automated answering system: (800) 424-5454 or (202) 512-7470:
Public Affairs:
Jeff Nelligan, managing director, NelliganJ@gao.gov:
(202) 512-4800:
U.S. General Accounting Office:
441 G Street NW, Room 7149:
Washington, D.C. 20548: