Program Evaluation

Studies Helped Agencies Measure or Explain Program Performance
GAO ID: GGD-00-204, September 29, 2000

This report addresses the need of congressional and federal agency decision-makers for evaluative information about how well federal programs are working, both to manage programs effectively and to help decide how to allocate limited federal resources. The evaluations GAO reviewed helped the agencies improve their measurement of program performance, their understanding of performance and how it might be improved, or both.

GAO noted that:

(1) evaluations helped the agencies improve their measurement of program performance or their understanding of performance and how it might be improved; some studies did both;
(2) to help improve their performance measurement, two agencies used the findings of effectiveness evaluations to provide data on program results that were otherwise unavailable;
(3) one agency supported a number of studies to help states prepare the groundwork for and pilot-test future performance measures;
(4) another used evaluation methods to validate the accuracy of existing performance data;
(5) to better understand program performance, one agency reported evaluation and audit findings to address other operational concerns about the program;
(6) four agencies drew on evaluations to explain the reasons for observed performance or to identify ways to improve performance;
(7) three agencies compared their programs' results with estimates of what might have happened in the programs' absence in order to assess each program's net impact or contribution to results;
(8) two of the evaluations GAO reviewed were initiated in response to legislative provisions, but most of the studies were self-initiated by agencies in response to concerns about a program's performance or about the availability of outcome data;
(9) some studies were initiated by agencies for reasons unrelated to meeting Government Performance and Results Act requirements and thus served purposes beyond those they were designed to address;
(10) in some cases, evaluations were launched to identify the reasons for poor program performance and to learn how it could be remedied;
(11) in other cases, agencies initiated special studies because they faced challenges in collecting outcome data on an ongoing basis;
(12) one departmentwide study was initiated to direct attention to an issue that cut across program boundaries and agencies' responsibilities;
(13) as agencies governmentwide update their strategic and performance plans, the examples in this report might help them identify ways that evaluations can contribute to understanding their programs' performance; and
(14) these cases also provide examples of ways agencies might leverage their evaluation resources by: (a) drawing on the findings of a wide array of evaluations and audits; (b) making multiple uses of an evaluation's findings; (c) mining existing databases; and (d) collaborating with state and local program partners to develop mutually useful performance data.


