Aviation Safety
Information on FAA's Data on Operational Errors At Air Traffic Control Towers
GAO ID: GAO-03-1175R, September 23, 2003
A fundamental principle of aviation safety is the need to maintain adequate separation between aircraft and to ensure that aircraft maintain a safe distance from terrain, obstructions, and airspace that is not designated for routine air travel. Air traffic controllers employ separation rules and procedures that define safe separation in the air and on the ground.1 An operational error occurs when the separation rules and procedures are not followed due to equipment or human error. Data maintained by the Federal Aviation Administration (FAA) indicate that a very small number of operational errors occur relative to the volume of air traffic--on average, about three occurred per day in fiscal year 2002. However, some of these errors can pose safety risks by directing aircraft onto converging courses and, potentially, into midair collisions. Congress asked us to provide information on FAA's data on operational errors and whether these data can be used to identify types of air traffic control facilities with greater safety risks. Specifically, we were asked to (1) determine what is known about the reliability and validity of the data that FAA maintains on operational errors and (2) identify whether comparisons of operational errors among air traffic control facilities can be used to determine the facilities' relative safety record.
We identified several potential limitations with FAA's data on operational errors based on our review of issued GAO and DOT reports and application of best methodological practices. First, it is very difficult to determine the completeness of the data. FAA collects data on operational errors from two sources--self-reporting by air traffic controllers and automatic reports of errors detected on the en route portion of a flight. Because some errors are detected only through self-reporting, underreporting is possible if air traffic controllers do not report every incident. Second, due to the way the data are recorded, the severity of many errors cannot be determined or is misleading. Prior to 2001, minor errors, such as establishing a 4.5-mile rather than a 5-mile separation, were counted in the same way as more serious errors, according to DOT. In 2001, DOT began to address this issue by establishing a rating system to identify the severity of, or collision hazard posed by, operational errors. The system uses a 100-point scale to rate and categorize operational errors as high, moderate, or low severity. However, in 2003, DOT's IG reported continuing concerns with FAA's data on operational errors. The IG noted that the new rating system provides misleading information and that FAA needs to modify the system to more accurately identify the most serious operational errors. The DOT IG found that in one instance FAA rated as moderate an operational error that was less than 12 seconds from becoming a midair collision. The IG believed that this operational error should have been rated as high severity. The IG also reported that FAA cannot be sure that air traffic controllers report all operational errors.
Comparisons of operational errors among types of air traffic control facilities, such as FAA-staffed facilities versus contractor-staffed facilities, cannot be used alone to provide valid conclusions about safety due to three factors that we identified based on standard methodological practices and our understanding of FAA's data. First, such problems as the completeness and specificity of data on operational errors are likely to affect the validity of comparisons among air traffic control facilities because operational errors may not be comparably reported at the types of facilities being compared. Second, factors that might affect the rate of operational errors, such as air traffic density, controller experience, and weather conditions, would need to be accounted for in any analysis. Finally, because operational errors are rare events, real differences in error rates among facilities may be difficult to detect. Because of these factors, the determination of real differences in the rate of operational errors between different types of air traffic control facilities is difficult, and comparisons of operational error rates alone are not sufficient to draw conclusions about the relative safety records of air traffic control facilities. At a minimum, the additional factors mentioned above would need to be considered and analyzed with a technique that models the occurrence of rare events and looks at these events over time. This approach, however, is not without risk and would depend upon the existence of proper and reliable data on operational error rates, operating conditions at the towers at the time the error occurred, and other factors that may be associated with operational errors. Such an approach would allow for a more meaningful comparison of facilities' operational errors through ascertaining and accounting for the multiple factors that may be associated with such errors.
This is the accessible text file for GAO report number GAO-03-1175R
entitled 'Aviation Safety: Information on FAA's Data on Operational
Errors At Air Traffic Control Towers' which was released on September
24, 2003.
This text file was formatted by the U.S. General Accounting Office
(GAO) to be accessible to users with visual impairments, as part of a
longer term project to improve GAO products' accessibility. Every
attempt has been made to maintain the structural and data integrity of
the original printed product. Accessibility features, such as text
descriptions of tables, consecutively numbered footnotes placed at the
end of the file, and the text of agency comment letters, are provided
but may not exactly duplicate the presentation or format of the printed
version. The portable document format (PDF) file is an exact electronic
replica of the printed version. We welcome your feedback. Please E-mail
your comments regarding the contents or accessibility features of this
document to Webmaster@gao.gov.
This is a work of the U.S. government and is not subject to copyright
protection in the United States. It may be reproduced and distributed
in its entirety without further permission from GAO. Because this work
may contain copyrighted images or other material, permission from the
copyright holder may be necessary if you wish to reproduce this
material separately.
September 23, 2003:
The Honorable James L. Oberstar:
Ranking Democratic Member:
Committee on Transportation and Infrastructure:
U.S. House of Representatives:
Subject: Aviation Safety: Information on FAA's Data on Operational
Errors at Air Traffic Control Towers:
A fundamental principle of aviation safety is the need to maintain
adequate separation between aircraft and to ensure that aircraft
maintain a safe distance from terrain, obstructions, and airspace that
is not designated for routine air travel. Air traffic controllers
employ separation rules and procedures that define safe separation in
the air and on the ground.[Footnote 1] An operational error occurs when
the separation rules and procedures are not followed due to equipment
or human error. Data maintained by the Federal Aviation Administration
(FAA) indicate that a very small number of operational errors occur
relative to the volume of air traffic--on average, about three occurred
per day in fiscal year 2002. However, some of these errors can pose
safety risks by directing aircraft onto converging courses and,
potentially, into midair collisions.
You asked us to provide information on FAA's data on operational errors
and whether these data can be used to identify types of air traffic
control facilities with greater safety risks. Specifically, you asked
us to (1) determine what is known about the reliability and
validity[Footnote 2] of the data that FAA maintains on operational
errors and (2) identify whether comparisons of operational errors among
air traffic control facilities can be used to determine the facilities'
relative safety record.
To answer these objectives, we reviewed past GAO studies[Footnote 3]
and reports by the Department of Transportation (DOT) and DOT's
Inspector General (IG) that pertain to FAA's data on operational errors
and applied standard methodological practices for data reliability,
validity, and analysis.[Footnote 4]
Data Have Reliability and Validity Limitations:
We identified several potential limitations with FAA's data on
operational errors based on our review of issued GAO and DOT reports
and application of best methodological practices. First, it is very
difficult to determine the completeness of the data. FAA collects data
on operational errors from two sources--self-reporting by air traffic
controllers and automatic reports of errors detected on the en route
portion of a flight. Because some errors are detected only through
self-reporting, underreporting is possible if air traffic controllers
do not report every incident. Second, due to
the way the data are recorded, the severity of many errors cannot be
determined or is misleading. Prior to 2001, minor errors, such as
establishing a 4.5-mile rather than a 5-mile separation, were counted
in the same way as more serious errors, according to DOT.[Footnote 5]
In 2001, DOT began to address this issue by establishing a rating
system to identify the severity of, or collision hazard posed by,
operational errors. The system uses a 100-point scale to rate and
categorize operational errors as high, moderate, or low severity.
However, in 2003, DOT's IG reported continuing concerns with FAA's data
on operational errors.[Footnote 6] The IG noted that the new rating
system provides misleading information and that FAA needs to modify the
system to more accurately identify the most serious operational errors.
The DOT IG found that in one instance FAA rated as moderate an
operational error that was less than 12 seconds from becoming a midair
collision. The IG believed that this operational error should have been
rated as high severity. The IG also reported that FAA cannot be sure
that air traffic controllers report all operational errors.
Comparison of Operational Errors Alone Does Not Provide Valid
Conclusions About Safety of Air Traffic Control Facilities:
Comparisons of operational errors among types of air traffic control
facilities, such as FAA-staffed facilities versus contractor-staffed
facilities, cannot be used alone to provide valid conclusions about
safety due to three factors that we identified based on standard
methodological practices and our understanding of FAA's data. First,
such problems as the completeness and specificity of data on
operational errors are likely to affect the validity of comparisons
among air traffic control facilities because operational errors may not
be comparably reported at the types of facilities being compared. For
example, as we mentioned above, FAA cannot be sure that all operational
errors at either FAA-staffed or contractor-staffed towers were
reported. When such a situation exists, it is difficult, if not
impossible, to determine whether the comparative results are valid or
are an artifact of underreporting at one or both types of air traffic
control facilities. Second, in order to make valid comparisons, a number
of factors that might affect the rate of operational errors would need
to be accounted for in an analysis. For example, air traffic density,
other operating conditions such as the number of flights, age and
experience of air traffic controllers, and weather conditions at the
time the error occurred all might influence operational errors. These
factors would have to be accounted for in any analysis comparing
operational errors among different types of facilities in order to
determine if the errors are associated with something other than the
type of air traffic control facility. Finally, as previously mentioned,
a very small number of operational errors occur in any given year (6.7
operational errors per million operations, on average, across all FAA
towers in fiscal year 2002), which may make it difficult to detect any
real differences in the error rates among facilities.
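The rarity described above can be illustrated with the two figures the report cites: about three operational errors per day and 6.7 errors per million operations in fiscal year 2002. The sketch below is back-of-the-envelope arithmetic only; the operations total it derives is implied by those two figures, not a number stated in this report.

```python
# Illustrative arithmetic using the two rates cited above (FY2002):
# ~3 operational errors per day and 6.7 errors per million operations.
# The operations total is derived from these figures, not stated in the report.

errors_per_day = 3
annual_errors = errors_per_day * 365            # ~1,095 errors in FY2002

rate_per_million = 6.7                          # errors per million operations
implied_operations = annual_errors / rate_per_million * 1_000_000

print(f"annual errors:      ~{annual_errors}")
print(f"implied operations: ~{implied_operations:,.0f}")  # roughly 163 million
```

Roughly a thousand errors spread across on the order of a hundred million operations leaves very few errors per facility, which is why facility-level rate differences are hard to detect.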
Because of these factors, the determination of real differences in the
rate of operational errors between different types of air traffic
control facilities is difficult, and comparisons of operational error
rates alone are not sufficient to draw conclusions about the relative
safety records of air traffic control facilities. At a minimum, the
additional factors mentioned above would need to be considered and
analyzed with a technique that models the occurrence of rare events and
looks at these events over time. This approach, however, is not without
risk and would depend upon the existence of proper and reliable data on
operational error rates, operating conditions at the towers at the time
the error occurred, and other factors that may be associated with
operational errors. Such an approach would allow for a more meaningful
comparison of facilities' operational errors through ascertaining and
accounting for the multiple factors that may be associated with such
errors.
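One way to see why rare counts frustrate such comparisons is a simple two-sample test of Poisson rates with different exposures. The sketch below is a hypothetical illustration of the statistical difficulty, not the modeling technique GAO or FAA would actually apply; all counts in it are invented.

```python
import math

def error_rate_z_test(errors_a, ops_a, errors_b, ops_b):
    """Approximate z-test for the difference between two Poisson error
    rates with different exposures (operations). Hypothetical example."""
    rate_a = errors_a / ops_a
    rate_b = errors_b / ops_b
    # The variance of each estimated rate is approximately count / exposure^2.
    std_err = math.sqrt(errors_a / ops_a**2 + errors_b / ops_b**2)
    return (rate_a - rate_b) / std_err

# Invented counts for two tower groups with similar underlying rates:
z = error_rate_z_test(40, 6_000_000, 30, 5_000_000)
print(round(z, 2))  # well below the conventional 1.96 significance threshold
```

With counts this small, even a modest true difference in rates would often go undetected. A fuller analysis of the kind the report describes would also model the counts over time and adjust for operating conditions, for example with a Poisson regression that uses operations as an exposure term.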
As arranged with your office, unless you publicly announce its contents
earlier, we plan no further distribution of this report until seven
days after the date of this report. At that time, we will send copies
of this report to interested congressional committees. The report will
also be available on GAO's home page at http://www.gao.gov. If you have
any questions about this report, please contact me at (202) 512-2834 or
by e-mail at dillinghamg@gao.gov. Key contributors to this assignment
are Isidro Gomez, Brandon Haller, Teresa Spisak, and Alwynne Wilbur.
Sincerely yours,
Gerald L. Dillingham:
Director, Civil Aviation Issues:
Signed by Gerald L. Dillingham:
(540075):
FOOTNOTES
[1] The Federal Aviation Administration (FAA) has established a
separation standard in the en route environment of 5 nautical miles
horizontally and either 1,000 or 2,000 feet vertically depending on
altitude. In the terminal environment, horizontal separation is
generally between 3 and 5 nautical miles depending on the type of
aircraft.
[2] Data reliability refers to the completeness and accuracy of data;
we define data as reliable when they are (1) complete and (2) accurate.
Reliability does not mean that data are error free, but that they are
sufficient for the intended purposes. Validity refers to whether the
data actually represent what one thinks is being measured. See U.S.
General Accounting Office, Assessing the Reliability of Computer-
Processed Data, GAO-02-15G (Washington, D.C.: Sept. 2002).
[3] See, for example, U.S. General Accounting Office, Air Traffic
Control: FAA Enhanced the Controller-in-Charge Program, but More
Comprehensive Evaluation Is Needed, GAO-02-55 (Washington, D.C.: Oct.
31, 2001).
[4] See GAO-02-15G; U.S. General Accounting Office, Government Auditing
Standards, GAO-03-673G (Washington, D.C.: June 2003); and GAO Policy
and Procedures Manual, Factors Affecting a Design's Credibility.
[5] U.S. Department of Transportation, Performance Report Fiscal Year
2000, Performance Plan, Fiscal Year 2002 (Washington, D.C.: April
2001).
[6] U.S. Department of Transportation, Office of Inspector General, Top
Management Challenges, Department of Transportation, PT-2003-012
(Washington, D.C.: Jan. 21, 2003) and Safety, Cost, and Operational
Metrics of the Federal Aviation Administration's Visual Flight Rule
Towers, AV-2003-057 (Washington, D.C.: Sept. 4, 2003).