Nursing Homes

Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature GAO ID: GAO-03-187 October 31, 2002

GAO was asked to review the Centers for Medicare & Medicaid Services (CMS) initiative to publicly report additional information on its "Nursing Home Compare" Web site intended to help consumers choose a nursing home. GAO examined CMS's development of the new nursing home quality indicators and efforts to verify the underlying data used to calculate them. GAO also reviewed the assistance CMS offered the public in interpreting and comparing indicators available in its six-state pilot program, launched in April 2002, and its own evaluation of the pilot. The new indicators are scheduled to be used nationally beginning in November 2002.

CMS's initiative to augment existing public data on nursing home quality has considerable merit, but its planned November 2002 implementation does not allow sufficient time to ensure the indicators are appropriate and useful to consumers. CMS's plan urges consumers to consider nursing homes with positive quality indicator scores, in effect, attempting to use market forces to encourage nursing homes to improve the quality of care. However, CMS is moving forward without adequately resolving important open issues on the appropriateness of the indicators chosen for national reporting or the accuracy of the underlying data. To develop and help select the quality indicators, CMS hired two organizations with expertise in health care data--Abt Associates, Inc. and the National Quality Forum (NQF). Abt identified a list of potential quality indicators and tested them to verify that they represented the actual quality of care individual nursing homes provide. GAO's review of the available portions of the report raised serious questions about the basis for moving forward with national reporting at this time. NQF, which was created to develop and implement a national strategy for measuring health care quality, was hired to review Abt's work and identify core indicators for national reporting. To allow sufficient time to review Abt's validation report, NQF agreed to delay its recommendations for national reporting until 2003. CMS limited its own evaluation of its six-state pilot program for the initiative so that the November 2002 implementation date could be met. Early results were expected in October 2002, leaving little time to incorporate them into the national rollout. Despite the lack of a final report from NQF and an incomplete pilot evaluation, CMS has announced a set of indicators it will begin reporting nationally in November 2002. GAO has serious concerns about the potential for the published quality information to confuse the public, especially if the quality indicators change significantly as a result of NQF's review. CMS's proposed reporting format implies a precision in the data that is lacking at this time. While acknowledging this problem, CMS said it prefers to wait until after the national rollout to modify the presentation of the data. GAO's analysis of data currently available from the pilot states demonstrated there was ample opportunity for the public to be confused, highlighting the need for clear descriptions of the data's limitations and easy access to impartial experts hired by CMS to operate consumer hotlines. CMS has not yet demonstrated its readiness to meet these consumer needs either directly or through the hotlines fielding public questions about confusing or conflicting quality data. CMS acknowledged that further work is needed to refine its initiative, but believes that its indicators are sufficiently valid, reliable, and accurate to move forward with national implementation in November 2002 as planned.

Report to Congressional Requesters:

United States General Accounting Office:

October 2002:

Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature:

GAO-03-187:

GAO Highlights:

Highlights of GAO-03-187, a report to congressional requesters.
What GAO Recommends:

GAO is recommending that the Administrator of CMS delay the national reporting of quality indicators to allow sufficient time to resolve important issues regarding appropriate indicators for public reporting and to implement a program to review the accuracy of the data on which the indicators are based. During this time, CMS also should more thoroughly evaluate the results of its six-state pilot to assess how information is presented and to improve assistance available through consumer hotlines.

[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-03-187]

To view the full report, including the scope and methodology, click on the link above. For more information, contact Kathryn G. Allen at (202) 512-7118.
[End of section]

Contents:

Letter:
  Results in Brief:
  Background:
  Appropriateness of Quality Indicators Proposed for Public Reporting Is Unresolved:
  CMS Has Not Addressed Concerns About the Underlying Accuracy of MDS Data Used to Develop Quality Indicators:
  The Public May Be Confused by Quality Data and CMS Is Not Prepared to Respond to Consumers' Questions:
  Pilot Evaluation Is Limited and Will Not Be Completed Prior to National Reporting of Quality Indicators:
  Conclusions:
  Recommendations for Executive Action:
  Comments from CMS and the NQF and Our Evaluation:

Appendixes:
  Appendix I: Comparison of Quality Indicators Proposed by NQF and CMS for National Rollout:
  Appendix II: Comments from the Centers for Medicare & Medicaid Services:
  Appendix III: Comments from the National Quality Forum:
  Appendix IV: GAO Contact and Staff Acknowledgments:

Tables:
  Table 1: Error Rates in MDS Items Used to Develop Selected Quality Indicators:
  Table 2: Percentage of Nursing Homes in the Six Pilot States with Missing Quality Indicator Scores:
  Table 3: Comparison of Publicly Reported Nursing Home Quality Indicator Scores and Quality-of-Care Survey Deficiencies in Six Pilot States:

Figure:
  Figure 1: Error Rates in MDS Assessments in 30 Nursing Homes Reviewed by Abt:

Abbreviations:
  CMS: Centers for Medicare & Medicaid Services
  HCFA: Health Care Financing Administration
  HHS: Department of Health and Human Services
  MDS: minimum data set
  NQF: National Quality Forum
  OIG: Office of Inspector General
  QIO: Quality Improvement Organization

[End of section]

United States General Accounting Office:
Washington, D.C. 20548:

October 31, 2002:

The Honorable Charles E. Grassley:
Ranking Minority Member:
Committee on Finance:
United States Senate:

The Honorable Christopher S. Bond:
United States Senate:

Almost half of all Americans over the age of 65 will spend time in a nursing home at some point in their lives. A series of congressional hearings since 1998 has focused considerable attention on the unacceptably high number of nursing homes with repeated, serious care problems that harmed residents or placed them at risk of death or serious injury. Given the large number of nursing home residents and the growing public concerns over quality-of-care problems, the Centers for Medicare & Medicaid Services (CMS) has shown a strong commitment to providing assistance to individuals and their families in choosing a nursing home. CMS is the agency within the Department of Health and Human Services (HHS) that manages Medicare and Medicaid and oversees compliance with federal nursing home quality standards. [Footnote 1] In 1998, the agency launched a Web site--"Nursing Home Compare"--that has progressively expanded the availability of public information on nursing homes and the quality of care provided. [Footnote 2] Initially, it posted data on deficiencies identified during routine state nursing home inspections, known as surveys. Data were later added on resident characteristics, such as the percentage of residents with pressure sores or physical restraints, nursing staff levels, and deficiencies found during state investigations of complaints. In November 2001, CMS announced a 12-month timeline for an initiative to (1) augment existing public data on nursing home quality, and (2) provide assistance to nursing homes to help improve their quality of care.
In addition to the valuable data already available on its Web site, CMS proposed including newly developed quality indicators that permit a fairer comparison across homes by adjusting for differences in residents' health characteristics. Quality indicators are essentially numeric warning signs of problems, such as more frequent than expected pressure sores among nursing home residents. They are based on data from facility-reported assessments--known as the minimum data set (MDS)--conducted at established intervals during each resident's nursing home stay. The initiative also envisioned a new role for Medicare Quality Improvement Organizations (QIO): engaging in partnership building and local promotional activities designed to put quality information into the hands of consumers and working with nursing homes on a voluntary basis to help improve quality of care. [Footnote 3]

In April 2002, CMS launched a six-state pilot to refine the initiative before planned nationwide implementation in November 2002. The six pilot states are Colorado, Florida, Maryland, Ohio, Rhode Island, and Washington. Medicare QIOs are working with 6 to 11 nursing homes in each pilot state on projects including improving pain management and preventing pressure sores.

In view of the importance of CMS's quality indicator initiative and the relatively short pilot time frame prior to national implementation, you asked us to review the (1) development of the new quality indicators for public reporting, (2) status of CMS's efforts to ensure the accuracy of the underlying data used to calculate the quality indicators, (3) assistance offered by CMS to the public in understanding the new quality indicator data, and (4) results of CMS's evaluation of the pilot.

To do so, we reviewed pertinent documents from CMS on the development of the new quality indicators, the approaches identified to adjust for differences in residents' characteristics in each facility, the validation of the new indicators, the operation and evaluation of the pilot, and the role of the QIOs. We discussed these areas with CMS officials and with researchers from Abt Associates, Inc., the lead CMS contractor responsible for the development and validation of the new quality indicators. We also interviewed and reviewed materials provided by officials of the National Quality Forum (NQF), a group CMS contracted with to review Abt's work and provide recommendations on quality indicators for public reporting. [Footnote 4] We examined the consistency of both the quality indicator data available in the six pilot states and the extent of agreement between such data and the results of nursing home surveys. To determine how CMS is assisting consumers in understanding the quality indicator data available in the six pilot states, we posed questions about discrepancies we identified between the indicators and survey deficiencies to staff who field public inquiries received by the Medicare and QIO toll-free telephone numbers. We conducted our work from March through September 2002 in accordance with generally accepted government auditing standards.

Results in Brief:

Overall, CMS's initiative to augment existing public data on nursing home quality has considerable merit, but its plan for nationwide implementation in November 2002 is premature.
Conceptually, CMS's plan encourages consumers making a decision about a nursing home to consider those with positive quality indicator scores--a use of market forces to encourage poorly performing homes to improve quality of care or face the loss of revenue. Such a plan hinges, in part, on appropriate quality indicators that consistently distinguish between good and poor care provided by nursing homes. However, CMS has not yet adequately resolved a number of open issues regarding the appropriateness of the quality indicators selected for public reporting and the accuracy of the underlying data. CMS contracted with two expert organizations, Abt and NQF, to develop and help select quality indicators appropriate for public reporting, but it does not intend to await NQF input before proceeding.

In August 2002, CMS announced a set of quality indicators it intends to begin reporting nationally in November 2002. Its selection was based on the results of Abt's efforts to validate the nursing home quality indicators it had developed for CMS. Although the full Abt validation report was not available to us as of October 28, 2002, our review of the available portions of the report raised serious questions about the basis for moving forward with national reporting at this time. Moreover, Abt's finding that the underlying MDS data are accurate is not convincing in light of the results of earlier studies that identified widespread errors in the accuracy of facility-specific assessments used to calculate some of the quality indicators CMS has selected for November reporting. In 2001, CMS also contracted with the NQF to review Abt's work and recommend a set of quality indicators for national reporting but in June 2002 asked the NQF to delay finalizing its recommendations until 2003. The delay will allow NQF to consider both the full Abt validation report and the results of an evaluation of the six-state pilot established to refine the initiative but will not allow CMS to consider NQF's input before its planned nationwide implementation.

In addition to concerns over the appropriateness of the quality indicators, we found that CMS was not well prepared to respond to consumer questions about the quality data and had not allotted sufficient time to incorporate lessons learned from its six-state pilot. CMS's reporting of quality indicators in the pilot states was neither consumer friendly nor presented in a format consistent with the data's limitations. For example, reporting homes' actual quality indicator scores rather than rankings--whether a home was in the bottom, middle, or top range of homes on a particular score--could make it difficult for consumers to interpret the differences in homes' scores and could imply a precision that does not currently exist. Our analysis of the data also demonstrated the potential for public confusion over (1) contradictory information from the quality indicators themselves--almost one-fifth of homes in pilot states had an equal number of highly positive and highly negative quality indicator scores--and (2) inconsistencies between quality indicator scores and data on deficiencies identified during nursing home surveys--almost one-fifth of homes with four or more highly positive quality indicator scores and no highly negative scores had at least one serious quality-of-care deficiency on a recent state survey.
Moreover, our telephone calls to the Medicare and QIO toll-free numbers revealed that CMS was not adequately prepared to address questions raised by the public, and we received erroneous or misleading information in the majority of calls we placed. Finally, CMS's evaluation of the pilot itself is limited and will not be fully completed until sometime in 2003--months after the planned national implementation of the initiative.

We are recommending that the Administrator of CMS delay the national reporting of quality indicators until (1) there is greater assurance that quality indicators are appropriate and based on accurate data and (2) a more thorough evaluation of the pilot is completed. In commenting on a draft of this report, CMS reiterated its commitment to continually improve the quality indicators and to work to resolve the issues discussed in our report. However, CMS does not intend to delay its initiative before resolving the issues raised. CMS believes that its quality measurement information is sufficiently reliable, valid, accurate, and useful to move forward with national implementation in November 2002 as planned.

Background:

Since 1998, the results from state surveys of nursing homes have been the principal source of public information on nursing home quality, which is posted and routinely updated on CMS's Nursing Home Compare Web site. Under contract with CMS, states are required to conduct periodic surveys that focus on determining whether care and services meet the assessed needs of the residents and whether homes are in compliance with federal quality requirements, such as preventing avoidable pressure sores, weight loss, or accidents. [Footnote 5] During a survey, a team that includes registered nurses spends several days at a home reviewing the quality of care provided to a sample of residents. States are also required to investigate complaints lodged against nursing homes by residents, families, and others. In contrast to surveys, complaint investigations generally target a single area in response to a complaint filed against a home. Any deficiencies identified during routine surveys or complaint investigations are classified according to the number of residents potentially or actually affected (isolated, pattern, or widespread) and their severity (potential for minimal harm, potential for more than minimal harm, actual harm, and immediate jeopardy).

To improve the rigor of the survey process, HCFA contracted for the development of quality indicators and required their use by state surveyors beginning in 1999. [Footnote 6] Quality indicators are derived from data collected during nursing homes' assessments of residents, known as the minimum data set (MDS). The MDS contains individual assessment items covering 17 areas, such as mood and behavior, physical functioning, and skin conditions. MDS assessments of each resident are conducted in the first 14 days after admission and periodically thereafter and are used to develop a resident's plan of care. [Footnote 7] Facility-reported MDS data are used by state surveyors to help identify quality problems at nursing homes and by CMS to determine the level of nursing home payments for Medicare; some states also use MDS data to calculate Medicaid nursing home payments. Because it also envisioned using indicators to communicate nursing home quality to consumers, HCFA recognized that any publicly reported indicators must pass a very rigorous standard for validity and reliability.
Valid quality indicators that distinguish between good and poor care provided by nursing homes would be a useful adjunct to existing quality data. Such indicators must also be reliable--that is, they must consistently distinguish between good and bad care. HCFA contracted with Abt to review existing quality indicators and determine if they were suitable for public reporting. Abt catalogued and evaluated 143 existing quality indicators, including those used by state surveyors. It also identified the need for additional indicators both for individuals with chronic conditions who are long-term residents of a facility and for individuals who enter a nursing home for a short period, such as after a hospitalization (a postacute stay). According to Abt, a main concern about publicly reporting quality indicators was that the quality indicator scores might be influenced by other factors, such as residents' health status. Abt concluded that the specification of appropriate risk adjustment models was a key requirement for the validity of any quality indicators.

Risk adjustment is important because it provides consumers with an "apples-to-apples" comparison of nursing homes by taking into consideration the characteristics of individual residents and adjusting quality indicator scores accordingly. For example, a home with a disproportionate number of residents who are bedfast or who present a challenge for maintaining an adequate level of nutrition--factors that contribute to the development of pressure sores--may have a higher pressure sore score. Adjusting a home's quality indicator score to fairly represent to what extent a home does--or does not--admit such residents is important for consumers who may wish to compare one home to another. After several years of work, Abt recommended 39 risk-adjusted quality indicators to CMS in October 2001. Twenty-two were based on existing indicators and the remaining 17 were newly developed by Abt, including 9 indicators for nursing home residents with chronic conditions and 8 indicators for individuals who enter a nursing home for a short period.

In September 2001, CMS contracted with the NQF to review Abt's work with the objective of (1) recommending a set of quality indicators for use in its planned six-state pilot and (2) developing a core set of indicators for national implementation of the initiative scheduled for late 2002. NQF established a steering committee to accomplish these two tasks. [Footnote 8] The steering committee met in November 2001 and identified 11 indicators for use in the pilot, 9 of which were selected by CMS. The committee made its selection from among Abt's list of 39 indicators but it did not recommend use of Abt's risk-adjustment approach. Moreover, the steering committee indicated that it would not be limited to the same Abt list in developing its recommended core set of indicators for national implementation. In April 2002, NQF released a draft consensus report identifying the indicators it had distributed to its members and the public for comment on their potential inclusion in the national implementation. [Footnote 9] Under its contract, NQF was scheduled to make a final recommendation to CMS prior to the national reporting of quality indicators.
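The quality indicator and risk adjustment concepts described above can be made concrete with a short sketch. It is illustrative only: the field names, risk weights, and the simple indirect-standardization formula are our own simplifying assumptions, not Abt's actual models, which were not fully available for our review. The sketch computes a raw pressure sore score as the percentage of assessed residents with the outcome, then scales it by the ratio of observed to expected rates so that a home admitting higher-risk residents is not penalized.

    # Illustrative sketch of a risk-adjusted quality indicator.
    # Field names and risk weights are hypothetical simplifications;
    # they are not Abt's actual risk-adjustment models.

    def raw_score(residents):
        # Percentage of residents with the measured outcome
        # (e.g., a pressure sore); not reported if no residents.
        if not residents:
            return None
        with_outcome = sum(1 for r in residents if r["has_pressure_sore"])
        return 100.0 * with_outcome / len(residents)

    def expected_rate(residents, weights):
        # Average predicted probability of the outcome given each
        # resident's characteristics, a stand-in for a resident-level
        # risk model.
        total = 0.0
        for r in residents:
            p = weights["baseline"]
            if r["bedfast"]:
                p += weights["bedfast"]
            if r["nutrition_problem"]:
                p += weights["nutrition_problem"]
            total += min(p, 1.0)
        return 100.0 * total / len(residents)

    def risk_adjusted_score(residents, weights, statewide_avg):
        # Indirect standardization: the observed/expected ratio is scaled
        # by the statewide average, so a home admitting sicker residents
        # is compared against what would be expected for its resident mix.
        return statewide_avg * raw_score(residents) / expected_rate(residents, weights)

    # A hypothetical four-resident home with many bedfast residents:
    weights = {"baseline": 0.05, "bedfast": 0.10, "nutrition_problem": 0.08}
    residents = [
        {"has_pressure_sore": True,  "bedfast": True,  "nutrition_problem": True},
        {"has_pressure_sore": True,  "bedfast": True,  "nutrition_problem": True},
        {"has_pressure_sore": False, "bedfast": True,  "nutrition_problem": False},
        {"has_pressure_sore": False, "bedfast": False, "nutrition_problem": False},
    ]
    print(raw_score(residents))                          # 50.0
    print(risk_adjusted_score(residents, weights, 9.0))  # about 27.3

As the example suggests, the choice of risk model and weights drives the adjusted result, which is why the unresolved questions about Abt's risk-adjustment approach, discussed below, matter for public reporting.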
Appropriateness of Quality Indicators Proposed for Public Reporting Is Unresolved:

CMS's initiative to augment existing public data on nursing home quality has considerable merit, but more time is needed to assure that the indicators proposed by CMS for public reporting are appropriate in terms of their validity and reliability. Based on work by Abt to validate the indicators it developed for CMS, the agency selected quality indicators for national reporting. The full Abt validation report--which is important for a thorough analysis of the appropriateness of the quality indicators--was still not available to us as of October 28, 2002. Our review of available portions of the Abt report, however, raised serious questions about whether testing and validation of the selected indicators have been sufficient to move forward with national reporting at this time. Moreover, CMS plans to initiate national reporting before it receives recommendations from NQF, its contractor, on appropriate quality indicators.

On August 9, 2002, CMS announced the 10 indicators selected for its nationwide reporting of quality indicators, which it plans to launch in mid-November 2002. CMS selected these indicators from those that Abt had validated in its August 2, 2002, validation report. [Footnote 10] Abt classified the indicators it studied as to the degree of validity--top, middle, or not valid. The indicators that CMS selected were in the top category with one exception--residents in physical restraints--which was in the middle category. The objective of Abt's validation study was to confirm that the indicators reflect the actual quality of care that individual nursing facilities provide, after taking into account resident and facility-level characteristics. For example, a validation analysis could confirm that a low percentage of pressure sores among residents was linked to a facility's use of procedures to prevent their development. Successful validation reduces the chance that publicly reported data could misrepresent a high-quality facility as a low-quality facility--or vice versa.

CMS's decision to implement national reporting in November 2002 is troubling, given the issues raised by our review of the available portions of Abt's validation report. Although we asked CMS for a copy of Abt's 11 technical appendixes, as of October 28, 2002, they were still undergoing review and were not available to us. The technical appendixes are essential to adequately understand and evaluate Abt's validation approach. Our review of the available portions of the Abt report raised serious questions about whether the effort to date has been sufficient to validate the indicators. The validation study is based on a sample that is drawn from six states; it is not representative of nursing homes nationwide and may not be representative of facilities in these six states. Selected facilities were allowed to decline participation and about 50 percent did so. For those facilities in the validation study, Abt deemed most of the indicators as valid--that is, better care processes were associated with higher quality indicator scores, taking into account resident and facility-level characteristics. However, we could not evaluate these findings because Abt provided little information on the specific care processes against which the indicators were validated. Unresolved questions also exist about the risk adjustment of the quality indicators.
Risk adjustment is a particularly important element in determining certain quality indicators because it may change the ranking of individual facilities--a facility that is among the highest on a particular quality indicator without risk adjustment may fall to the middle or below after risk adjustment--and vice versa. Data released by CMS in March 2002 demonstrated that Abt's risk adjustment approaches could either lower or raise facility scores by 40 percent or more. Although such changes in ranking may be appropriate, Abt did not provide detailed information on how its risk adjustment approaches changed facility rankings or a basis for assessing the appropriateness of the changes.

In addition to the questions raised by our review of the Abt validation report, CMS is not planning to wait for the expert advice it sought on quality indicators through its contract with the NQF. Under this contract, the NQF steering committee issued a consensus draft in April 2002 with a set of potential indicators for public reporting. The steering committee had planned to complete its review of these indicators using its consensus process by August 2002. [Footnote 11] In late June, however, CMS asked NQF to delay finalizing its recommendations until early 2003 to allow (1) consideration of Abt's August 2002 report on the validity of its indicators and risk-adjustment methods--including the technical appendixes, when they become available--and (2) a review of the pilot evaluation results expected in October 2002. An NQF official told us that the organization agreed to the delay because the proposed rapid implementation timeline had been a concern since the initiative's inception. CMS's list of quality indicators for the November 2002 national rollout did not include six indicators under consideration by NQF--depression, incontinence, catheterization, bedfast residents, weight loss, and rehospitalization (see app. I). Instead, CMS intends to consider NQF's recommendations and revise the indicators used in the mid-November national rollout sometime next year.

CMS is also moving forward without a consensus on risk adjustment of quality indicators. CMS is planning to report one indicator with facility-level adjustment based on a profile of residents' status at admission, and two indicators both with and without this Abt-developed risk adjuster. [Footnote 12] However, both Abt and NQF have concluded that adjusting for the type of residents admitted to the nursing home required further research to determine its validity. [Footnote 13] We believe that reporting the same indicator with and without facility-level risk adjustment could serve to confuse rather than help consumers. Two of the three consultants hired by NQF specifically recommended against the use of facility-level adjustments in public reporting at this time. We also found that, as of October 1, 2002, CMS had not reached internal consensus on how to describe the risk-adjustment methods used in each of the 10 indicators it plans to begin reporting nationally in November 2002. Several agency officials agreed with our assessment that the descriptions on its Web site were inconsistent with Abt's own descriptions of the risk adjustment associated with each indicator.

CMS Has Not Addressed Concerns About the Underlying Accuracy of MDS Data Used to Develop Quality Indicators:

Two different Abt studies have presented CMS with conflicting messages about the accuracy of MDS data.
Abt's August 2002 quality indicator validation report suggested that the underlying data used to calculate most indicators were, in the aggregate, very reliable. However, our analysis of more detailed facility-level data in a February 2001 Abt report raised questions about the reliability of some of the same MDS data. Because MDS data are used by CMS and some states to determine the level of nursing home payments for Medicare and Medicaid and to calculate quality indicators, ensuring their accuracy at the facility level is critical both for determining appropriate payments and for public reporting of the quality indicators. Recognizing the importance of accurate MDS data, CMS is in the process of implementing a national MDS accuracy review program expected to become fully operational in 2003, after the nationwide reporting of quality indicators begins in November 2002. We recently reported that CMS's review program is too limited in scope to provide adequate confidence in the accuracy of MDS assessments in the vast bulk of nursing homes nationwide. [Footnote 14]

Abt's August 2, 2002, validation report concluded that the reliability of the underlying MDS data used to calculate 39 quality indicators ranged from acceptable to superior, with the data for only 1 indicator proving unacceptable. [Footnote 15] Abt's findings were based on a comparison of assessments conducted by its own nurses to assessments performed by the nursing home staff in 209 sample facilities. For each quality indicator, Abt reported the overall reliability for all of the facilities in its sample. [Footnote 16] However, because quality indicators will be reported for each nursing home, overall reliability is not a sufficient assurance that the underlying MDS data are reliable for each nursing home. Although Abt did not provide information on MDS reliability for individual facilities, it noted that reliability varied considerably within and across states.

Earlier work by Abt and others calls into question the reliability of MDS data. Abt's February 2001 report on MDS data accuracy identified significant variation in the rate of MDS errors across the 30 facilities sampled. [Footnote 17] Differences between assessments conducted by Abt's nurses and the nursing home staff were classified as errors by Abt. Error rates for all MDS items averaged 11.7 percent but varied across facilities by a factor of almost two--from 7.8 percent to 14.5 percent. As shown in figure 1, the majority of error rates were higher than 10.5 percent. Furthermore, error rates for some of the individual MDS items used to calculate the quality indicators were much higher than the average error rate. [Footnote 18] According to Abt, the least accurate sections of the MDS included physical functioning and skin conditions. Abt also noted that there was a tendency for facilities to underreport residents with pain. [Footnote 19] MDS items from these portions of the assessment are used to calculate several quality indicators that CMS plans to report nationally in November 2002--activities of daily living, pressure sores, and pain management. Table 1 shows that the error rate across the residents sampled ranged from 18 percent for pressure sores to 42 percent for pain intensity. [Footnote 20] Abt's February 2001 findings were consistent with areas that states have identified as having a high potential for error, including activities of daily living and skin conditions. [Footnote 21]
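The error rates discussed above come from comparing two assessments of the same resident: one completed by the facility's staff and one by a research nurse, with any disagreement on an item counted as an error. A minimal sketch of that computation follows; the item names and records are hypothetical, not Abt's actual data.

    # Sketch of item-level MDS error-rate computation. A research nurse's
    # reassessment is treated as the gold standard; a facility response
    # that differs from it on an item counts as an error for that item.
    # Item names and records below are hypothetical.

    def item_error_rates(paired_assessments, items):
        # paired_assessments: list of (facility_record, nurse_record) pairs.
        rates = {}
        for item in items:
            errors = sum(1 for facility, nurse in paired_assessments
                         if facility[item] != nurse[item])
            rates[item] = 100.0 * errors / len(paired_assessments)
        return rates

    items = ["bed_mobility", "transfer", "pain_frequency", "pressure_sore"]
    pairs = [
        ({"bed_mobility": 2, "transfer": 1, "pain_frequency": 0, "pressure_sore": 0},
         {"bed_mobility": 3, "transfer": 1, "pain_frequency": 1, "pressure_sore": 0}),
        ({"bed_mobility": 1, "transfer": 2, "pain_frequency": 2, "pressure_sore": 1},
         {"bed_mobility": 1, "transfer": 2, "pain_frequency": 2, "pressure_sore": 1}),
    ]
    print(item_error_rates(pairs, items))
    # {'bed_mobility': 50.0, 'transfer': 0.0, 'pain_frequency': 50.0, 'pressure_sore': 0.0}

Computing such rates one facility at a time, rather than pooled across all facilities, is what produces the facility-to-facility spread shown in figure 1; pooled rates can look acceptable even when individual homes do not.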
Moreover, a study by the HHS Office of Inspector General (OIG), which identified differences between the MDS assessment and the medical record, found that activities of daily living was among the areas that provided the greatest source of differences. [Footnote 22] In addition, the OIG report noted that 40 percent of the nursing home MDS coordinators it surveyed identified the physical functioning section, used to calculate the quality indicator on activities of daily living, as the most difficult to complete. Some coordinators explained that facility staff view a resident's capabilities differently and thus the assessments tend to be subjective.

Figure 1: Error Rates in MDS Assessments in 30 Nursing Homes Reviewed by Abt:

[See PDF for image]

This figure is a vertical bar graph depicting the distribution of error rates across the 30 nursing homes:

Percent of assessments with errors:  Number of nursing homes:
8.5 or less:                         1
8.6-9.5:                             2
9.6-10.5:                            4
10.6-11.5:                           6
11.6-12.5:                           5
12.6-13.5:                           7
13.6-14.5:                           5

Source: GAO analysis of data from Abt Associates, Inc., Minimum Data Set Accuracy.

[End of figure]

Table 1: Error Rates in MDS Items Used to Develop Selected Quality Indicators:

MDS item:                  Error rate (percent):

Physical functioning (used to calculate the quality indicator on decline in activities of daily living):
  Bed mobility:            39
  Transfer:                34
  Eating:                  37
  Toilet use:              35

Skin condition (used to calculate the quality indicator on prevalence of pressure sores):
  Pressure sore:           18

Health condition (used to calculate the quality indicator on inadequate pain management):
  Pain frequency:          39
  Pain intensity:          42

Source: Abt Associates, Inc., Minimum Data Set Accuracy.

[End of table]

As part of CMS's efforts to improve MDS accuracy, its contractor is still field-testing the on-site aspect of its approach, which is not expected to be implemented until 2003. [Footnote 23] Although Abt's February 2001 report found widespread MDS errors, CMS intends to review roughly 1 percent of the MDS assessments prepared over the course of a year, which numbered 14.7 million in 2001. Moreover, only 10 percent of the reviews will be conducted on-site at nursing homes.
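The scale of the planned review program follows directly from these figures; a back-of-the-envelope calculation (the percentages are those cited in this report, applied to the 2001 assessment count):

    # Back-of-the-envelope scale of CMS's planned MDS accuracy reviews,
    # using figures cited in this report.
    annual_assessments = 14_700_000  # MDS assessments filed in 2001
    review_fraction = 0.01           # roughly 1 percent to be reviewed
    onsite_fraction = 0.10           # 10 percent of reviews done on-site

    reviewed = annual_assessments * review_fraction
    onsite = reviewed * onsite_fraction
    print(f"{reviewed:,.0f} reviewed, {onsite:,.0f} on-site per year")
    # 147,000 reviewed, 14,700 on-site per year: about one-tenth of
    # 1 percent of all assessments would be checked at the home itself.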
In contrast, our prior work on MDS found that 9 of the 10 states with MDS-based Medicaid payment systems that examine MDS data's accuracy conduct periodic on-site reviews in all or a significant portion of their nursing homes, generally examining from 10 to 40 percent of assessments. On-site reviews heighten facility staff awareness of the importance of MDS data and can lead to the correction of practices that contribute to MDS errors. We reported earlier that CMS's approach may yield some broad sense of the accuracy of MDS assessments on an aggregate level but is insufficient to provide confidence about the accuracy of MDS assessments in the vast bulk of nursing homes nationwide.

The Public May Be Confused by Quality Data and CMS Is Not Prepared to Respond to Consumers' Questions:

While CMS is strongly committed to making more information available to the public on nursing home quality and such an initiative has considerable merit, the agency had not demonstrated a readiness to assist the public in understanding and using those data. We found that CMS's reporting of quality indicators in the six pilot states was neither consumer friendly nor reported in a format consistent with the data's limitations, implying a greater degree of precision than is currently warranted. Our analysis of the data currently available in the six pilot states demonstrated the potential for public confusion over both the quality indicators themselves and inconsistencies with other available data on deficiencies identified during nursing home surveys--which, to date, are the primary source of public data on nursing home quality. Moreover, our phone calls to the Medicare and QIO toll-free numbers revealed that CMS was not adequately prepared to address consumers' questions raised by discrepancies between conflicting sources of quality data.

Our review of the quality indicators on the CMS Web site found that the presentation of the data was not consumer friendly and that the reporting format implies a greater confidence in the data's precision than may be warranted at this time. Quality indicators are reported as the percentage of residents in a facility having the particular characteristics measured by each indicator. The Web site explains that having a low percentage of residents with pressure sores or pain is better than having a high percentage. In the six-state pilot, the public can compare a nursing home's score to the statewide and overall average for each quality indicator. We believe that equating a high score with poor performance is counterintuitive and could prove confusing to consumers. [Footnote 24] Despite the Web site's explanation of how to interpret the scores, the public might well assume that a high score is a positive sign. In addition, reporting actual quality indicator scores rather than the range of scores a home falls into for an indicator--a low, medium, or high score--can be confusing and implies a confidence in the precision of the results that is currently a goal rather than a reality. Consumers will find it difficult to assess a home with a score that is 5 to 10 percentage points from the state average. Such a home could be an outlier--one of the best or the worst on that indicator; alternatively, it could be that the home was close to the state average because the outliers involved much larger differences. Concerns about the validity of the indicators and the potential reliability of the data make comparisons of homes with similar scores questionable.
Consumers may be misled if a difference of several percentage points between two homes is perceived as demonstrating that one is better or worse than the other. To partially address these types of concerns, Maryland has reported quality indicator data on its own Web site since August 2001 in ranges rather than individual values. Thus, it indicates if a facility falls into the bottom 10 percent, the middle 70 percent, or the top 20 percent of facilities in the state.

Consumers may also be confused about how to interpret missing information. Although the CMS Web site explains that quality indicator scores are not reported for nursing homes with too few residents, it does not acknowledge the extent of such missing data. We found that 6 percent of all nursing homes in the six pilot states have no score for any of the nine quality indicators and that, for individual indicators, from 9 percent to 40 percent of facilities have missing scores (see table 2). [Footnote 25] When data for homes of potential interest to consumers are not reported, consumers may need some assistance in how to incorporate such instances into their decisionmaking.

Table 2: Percentage of Nursing Homes in the Six Pilot States with Missing Quality Indicator Scores:

Quality indicator:                           Percentage of nursing homes with missing score:

Chronic-care quality indicators:
  Decline in activities of daily living:     19%
  Infections:                                16%
  Inadequate pain management:                16%
  Pressure sores:                            16%
  Physical restraints used daily:            9%
  Weight loss:                               21%

Short-stay quality indicators:
  Failure to improve and manage delirium:    36%
  Inadequate pain management:                35%
  Improvement in walking:                    40%

Source: GAO analysis of CMS quality indicator data available on its Web site for the six pilot states.

[End of table]

Consumer confusion may also occur when quality indicator scores send conflicting messages about the overall quality of care at a home. We found that the Web site data for a significant number of facilities contained such inconsistencies. Seventeen percent of nursing homes in the six pilot states had an equal number of highly positive and highly negative quality indicator scores. We defined highly positive scores as those indicating that a facility was among the 25 percent of homes with the lowest percentage of residents exhibiting poor outcomes, such as a decline in their ability to walk or use the toilet. In contrast, facilities with a highly negative score were among the top 25 percent of homes with poor outcomes. We also found that 37 percent of nursing homes with four or more highly positive quality indicator scores had two or more highly negative scores. [Footnote 26]
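Our classification of scores as highly positive or highly negative can be expressed as a short sketch. The data layout and function names here are hypothetical illustrations, not CMS's or our production analysis code; the quartile rule follows the definition above, and lower percentages are better for these indicators.

    # Sketch of the quartile-based flags used in our analysis: for each
    # indicator, homes in the best-scoring 25 percent are flagged highly
    # positive and homes in the worst-scoring 25 percent highly negative.
    # Data layout (home -> {indicator: score or None}) is hypothetical.

    def classify(scores_by_home, indicator):
        # Flag each scored home as "positive", "negative", or neither.
        scored = {h: s[indicator] for h, s in scores_by_home.items()
                  if s.get(indicator) is not None}  # skip missing scores
        if not scored:
            return {}
        ordered = sorted(scored.values())
        q1 = ordered[len(ordered) // 4]        # best-quartile cutoff
        q3 = ordered[(3 * len(ordered)) // 4]  # worst-quartile cutoff
        return {h: ("positive" if v <= q1 else
                    "negative" if v >= q3 else None)
                for h, v in scored.items()}

    def contradictory_homes(scores_by_home, indicators):
        # Homes with an equal, nonzero number of highly positive and
        # highly negative flags across all the indicators.
        counts = {h: [0, 0] for h in scores_by_home}
        for indicator in indicators:
            for home, flag in classify(scores_by_home, indicator).items():
                if flag == "positive":
                    counts[home][0] += 1
                elif flag == "negative":
                    counts[home][1] += 1
        return [h for h, (pos, neg) in counts.items() if pos == neg and pos > 0]

Counting homes where the two flag totals are equal produces the 17 percent figure above; cross-tabulating the flags against serious survey deficiencies produces table 3 below.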
In addition, our comparison of survey deficiency data available on the Web site with quality indicator scores also revealed inconsistencies. For example, 17 percent of nursing homes with four or more highly positive quality indicator scores and no highly negative scores--seemingly "good" nursing homes--had at least one serious quality-of-care deficiency on a recent state survey. We have found that serious deficiencies cited by state nursing home surveyors were generally warranted and indeed reflected instances of documented actual harm to nursing home residents. [Footnote 27] Moreover, 73 percent of nursing homes with four or more highly negative quality indicator scores--seemingly "bad" facilities--had no serious quality-of-care deficiencies on a recent survey (see table 3). The latter situation is consistent with our past work that surveyors often miss serious quality-of-care problems. [Footnote 28] Nevertheless, consumers will generally lack such insights on the reliability of state surveys that would permit them to better assess the available data on quality of care.

Table 3: Comparison of Publicly Reported Nursing Home Quality Indicator Scores and Quality-of-Care Survey Deficiencies in Six Pilot States:

Column A: Nursing homes with four or more highly positive quality indicator scores and no highly negative quality indicator scores that had at least one serious quality-of-care survey deficiency (percent).
Column B: Nursing homes with four or more highly negative quality indicator scores that had no serious quality-of-care survey deficiency (percent).

State:         Column A:  Column B:
Colorado:      0%         75%
Florida:       11%        81%
Maryland:      29%        78%
Ohio:          27%        69%
Rhode Island:  0%         92%
Washington:    25%        47%
Total:         17%        73%

Note: A serious quality-of-care deficiency indicates that surveyors found actual harm to residents.

Source: GAO analysis of quality indicator and survey deficiency data available on CMS's Web site for the six pilot states.

[End of table]

With the apparent need for assistance to consumers in interpreting and using this information, the important role of the Medicare and QIO toll-free numbers is evident. We requested and reviewed copies of the Medicare hotline and QIO scripts and found that they did not address the issue of responding to questions about conflicting or confusing quality data. Furthermore, our calls to the Medicare hotline and to QIO toll-free numbers in the six pilot states demonstrated that the staff were not adequately prepared to handle basic questions about the quality data available under the pilot. CMS officials had told us that Medicare hotline callers with complicated questions would be seamlessly transferred to a QIO without having to hang up and call another number. Although we asked the Medicare hotline staff if another organization might be better able to respond to our questions, no one offered to refer us to QIOs, even when we specifically asked about them. In fact, one hotline staff member told us that a QIO would not be an appropriate referral. Consequently, we independently attempted to call the QIOs in the six pilot states. We found that it was difficult to reach a QIO staff member qualified to answer questions. Each QIO had a toll-free number, but neither the automated recordings at four QIOs nor operators at the remaining two indicated that the caller had reached a QIO. [Footnote 29] In addition, the automated recordings did not contain a menu choice for questions about nursing home quality indicators. [Footnote 30] We were unable to contact one QIO because the hotline had neither an operator nor a voice mail capability. On other calls, after reaching a QIO staff person, it frequently took several referrals to identify an appropriate contact point. One QIO took 5 working days for a staff member to call us back. Four of the five QIOs we contacted explained that their primary role was to work with nursing homes to improve quality of care.

In general, QIO staff were not prepared to respond to consumer questions. Staff at the Medicare hotline and the QIOs varied greatly in their basic understanding of quality indicators and survey deficiencies. While two of the nine staff we contacted were generally knowledgeable about different types of quality data, others were unable to answer simple questions and the majority provided erroneous or misleading data. One QIO staff member told us that MDS data were not representative of all residents of a nursing home but only presented a "little picture" based on a few residents. However, assessments of all residents are taken into consideration in calculating quality indicators. When we expressed concern about a home identified on the Web site with a "level-3" deficiency, a Medicare hotline staff member incorrectly told us that it was not a serious deficiency because level 3 indicated potential harm. [Footnote 31]
CMS designates actual harm deficiencies as "level-3" deficiencies. A QIO staff member incorrectly told us that actual harm pressure sore deficiencies had nothing to do with patient care and might be related to paperwork. Our review of survey reports has shown that actual harm deficiencies generally involved serious quality-of-care problems resulting in resident harm. [Footnote 32] Generally, hotline staff did not express a preference for using either nursing home surveys or quality indicators in choosing a nursing home. Two QIO staff, however, stated that the nursing home survey information gave a better picture of nursing home care than the quality indicators, which they judged to be imprecise and subject to variability.

Pilot Evaluation Is Limited and Will Not Be Completed Prior to National Reporting of Quality Indicators:

CMS's evaluation of the pilot is limited and will not be completed prior to national reporting of quality indicators because of the short period of time between the launch of the pilot and the planned November 2002 national implementation. According to CMS officials, the pilot evaluation was never intended to help decide whether the initiative should be implemented nationally or to measure the impact on nursing home quality. While CMS is interested in whether nursing home quality actually improves as a result of the initiative, it will be some time before such a determination can be made. Thus, CMS focused the pilot evaluation on identifying improvements that could be incorporated into the initiative's design prior to the scheduled national implementation in November 2002. A CMS official told us that initial pilot evaluation results were expected by early October 2002, allowing just over a month to incorporate any lessons learned. In commenting on a draft of this report, CMS stated that it was using preliminary findings to steer national implementation. [Footnote 33] The final results of the pilot evaluation will not be completed until sometime in 2003.

CMS's evaluation of the pilot is focused on identifying how to communicate more effectively with consumers about the initiative and how to improve QIO interaction with nursing homes. Specifically, CMS will assess whether (1) the target audiences were reached; (2) the initiative increased consumer use of nursing home quality information; [Footnote 34] (3) consumers used the new information to choose a nursing home; (4) QIO activities influenced nursing home quality improvement activities; (5) nursing homes found the assistance provided by QIOs useful; and (6) the initiative influenced those who might assist consumers in selecting a nursing home, such as hospital discharge planners and physicians. Information is being collected by conducting consumer focus groups, tracking Web site "hits" and toll-free telephone inquiries, administering a Web site satisfaction survey, and surveying nursing homes, hospital discharge planners, and physicians. As of late August 2002, CMS teams were also in the process of completing site visits to stakeholders in the six pilot states, including QIOs, nursing homes, ombudsmen, survey agencies, nursing home industry representatives, and consumer advocacy groups. The teams' objective is to obtain a first-hand perspective of how the initiative is working, with the goal of implementing necessary changes and better supporting the program in the future.
Conclusions: Although CMS's initiative to publicly report nursing home quality indicators is a commendable and worthwhile goal, we believe it is important for CMS to wait for and consider input from NQF and to make the necessary adjustments to the initiative accordingly. We believe several factors demonstrate that CMS's planned national reporting of quality indicators in November 2002 is premature. Our review of the available portions of Abt's validation report raised serious questions about whether the effort to date has been sufficient to validate the quality indicators. NQF was asked to delay recommending a set of indicators for national reporting until 2003, in part, to provide sufficient time for it to review Abt's report. Although limited in scope, CMS's planned MDS accuracy review program will not begin on-site accuracy reviews of the data underlying quality indicators until 2003. CMS's own evaluation of the pilot, designed to help refine the initiative, was limited to fit CMS's timetable for the initiative, and the preliminary findings were not available until October 2002, leaving little time to incorporate the results into the planned national rollout. Other aspects of the evaluation will not be available until early 2003. We also have serious concerns about the potential for public confusion over quality data, highlighting the need for clear descriptions of the data's limitations and easy access to informed experts at both the Medicare and QIO hotlines. CMS has not yet demonstrated its readiness to meet these consumer needs either directly or through the QIOs. Recommendations for Executive Action: To ensure that publicly reported quality indicator data accurately reflect the status of quality in nursing homes and fairly compare homes to one another, we recommend that the Administrator of CMS delay the implementation of nationwide reporting of quality indicators until: * there is greater assurance that the quality indicators are appropriate for public reporting--including the validity of the indicators selected and the use of an appropriate risk-adjustment methodology--based on input from the NQF and other experts and, if necessary, additional analysis and testing; and * a more thorough evaluation of the pilot is completed to help improve the initiative's effectiveness, including an assessment of the presentation of information on the Web site and the resources needed to assist consumers' use of the information. Comments from CMS and the NQF and Our Evaluation: CMS and the NQF reviewed and provided comments on a draft of this report. (See app. II and app. III, respectively.) CMS reiterated its commitment to continually improve the quality indicators and to work to resolve the issues discussed in our report. Although CMS stated it would use our report to help improve the initiative over time, it intends to move forward with national implementation in November 2002 as planned. It stated that "waiting for more reliability, more validity, more accuracy, and more usefulness will delay needed public accountability, and deprive consumers, clinicians, and providers of important information they can use now." The NQF commented that it unequivocally supports CMS's plans to publicly report quality indicators but indicated that the initiative would benefit from a short-term postponement of 3 to 4 months to achieve a consensus on a set of indicators and to provide additional time to prepare the public on how to use and interpret the data.
We continue to support the concept of reporting quality indicators, but remain concerned that a flawed implementation could seriously undercut support for, and the potential effectiveness of, this very worthwhile initiative. CMS's comments and our evaluation focused largely on two issues: (1) the selection and validity of quality indicators, and (2) lessons learned from CMS's evaluation of the pilot initiative. Selection and Validity of Quality Indicators: CMS asserts that the quality indicators it plans to report nationally are reliable, valid, accurate, and useful and that it has received input from a number of sources in selecting the indicators for this initiative. However, CMS provided no new evidence addressing our findings regarding the appropriateness of the quality indicators selected for public reporting and the accuracy of the underlying data. We continue to believe that, prior to nationwide implementation, CMS should resolve these open issues. CMS intends to move forward with nationwide implementation without a requested NQF assessment of the full Abt validation report and without NQF's final recommendations on quality indicators. CMS would not share the technical appendices to Abt's validation report with us because they were undergoing review and revision. The technical appendices are critical to assessing Abt's validation approach. CMS's comments did not address our specific findings on the available portions of Abt's validation report, including that (1) the validation results are not representative of nursing homes nationwide because of limitations in the selection of a sample of nursing homes to participate in the validation study, and (2) Abt provided little information on the specific care processes against which the indicators were validated or on how its risk-adjustment approaches changed facility rankings and whether those changes were appropriate. Although both Abt and the NQF concluded that Abt's facility-level risk-adjustment approach required further research to determine its validity, CMS plans to report two indicators with and without facility-level adjustments. CMS's comments indicated that it has chosen to report these measures both ways in order to evaluate their usefulness and to give facilities and consumers the additional information. We continue to believe that reporting data of uncertain validity is inappropriate and, as such, will likely not be useful to either facilities or consumers. For quality indicators to be reliable, the underlying MDS data used to calculate the indicators must be accurate. CMS's comments did not specifically address the conflicting findings on MDS accuracy from Abt's August 2002 validation report and its February 2001 report to CMS. Abt's August 2002 validation report concluded that, in aggregate, the underlying MDS data were very reliable but that the reliability varied considerably within and across states. Aggregate reliability, however, is insufficient because quality indicators are reported separately for each facility. In its February 2001 report to CMS, Abt identified widespread errors in the accuracy of facility-specific assessments used to calculate some of the quality indicators that CMS has selected for reporting in November. CMS indicated that its efforts since 1999 have improved MDS accuracy. But because CMS does not plan to begin limited on-site MDS accuracy reviews until 2003, there is little evidence to support this assertion.
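The distinction between aggregate and facility-level reliability can be made concrete with a small numerical sketch. The error rates and assessment counts below are invented for illustration; they are not Abt's figures.

    # Hypothetical per-facility MDS error counts and assessment volumes.
    facilities = {
        "Home A": {"errors": 10, "assessments": 1000},  # 1 percent error rate
        "Home B": {"errors": 5, "assessments": 500},    # 1 percent error rate
        "Home C": {"errors": 60, "assessments": 200},   # 30 percent error rate
    }

    total_errors = sum(f["errors"] for f in facilities.values())
    total_assessments = sum(f["assessments"] for f in facilities.values())

    # The aggregate rate pools all assessments, so large, accurate
    # facilities dominate and the pooled figure looks reassuring.
    print(f"Aggregate error rate: {100 * total_errors / total_assessments:.1f}%")

    # Facility-level rates, the level at which quality indicators are
    # actually reported, tell a very different story for Home C.
    for name, f in facilities.items():
        print(f"{name}: {100 * f['errors'] / f['assessments']:.1f}%")

In this illustration the pooled error rate is about 4 percent even though one facility's rate is 30 percent, which is why reliability that holds only in aggregate cannot support facility-by-facility public reporting.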
Lessons Learned from CMS's Evaluation of the Pilot Initiative: CMS commented that findings from a number of activities evaluating the six-state pilot were not available before we asked for comments on our draft report. While final reports are not yet available for some of these studies, CMS stated that the pilot allowed it to work through important issues and incorporate lessons learned before a national launch. We pointed out that the pilot evaluation was limited and incomplete--an additional reason to delay the initiative. CMS also did not evaluate a key implementation issue--the adequacy of assistance available to consumers through its toll-free telephone hotlines. Moreover, the lack of formal evaluation reports to help guide the development of a consensus about key issues, such as how quality indicators should be reported, is troubling. In its comments, CMS stated that it was committed to working aggressively to help the public understand nursing home quality information using lessons learned from the pilot. However, CMS learned about the flaws in its hotline operations not from its pilot evaluation but from our attempts to use the Medicare and QIO toll-free phone numbers to obtain information on quality data. Acknowledging the weaknesses we identified, the agency laid out a series of actions intended to strengthen the hotlines' ability to respond to public inquiries, such as providing additional training to customer service representatives prior to the national launch of the initiative. CMS outlined other steps it plans to take, such as providing its customer service representatives with new scripts and with answers to the most frequently asked questions. At the outset of the pilot in April 2002, CMS described seamless transfers from the Medicare to the QIO hotlines for complicated consumer questions but now acknowledges that limitations in QIO telephone technology prevent such transfers. Instead of automatic transfers, CMS stated that, when referrals to QIOs are necessary, callers will be provided with a direct toll-free phone number. CMS also commented that consumers should be encouraged to consider multiple types of information on nursing home quality. While we agree, we believe it is critical that customer service representatives have a clear understanding of the strengths and limitations of different types of data to properly inform consumers when they inquire. CMS commented that we offered no explanation of the analysis that led us to conclude that (1) consumers could be confused because scores on quality indicators can conflict with each other and with the results of routine nursing home surveys, and (2) the public may confuse a high quality indicator score with a positive result. Our draft clearly states that our findings were based on our analysis of the quality indicator data and survey results available in the six pilot states--a database that CMS provided at our request. In its comments, CMS provided limited data to support its assertion that consumers are not confused by the quality indicators and are very satisfied with the current presentation on its Web site. According to CMS, over two-thirds of respondents to its August 2002 online satisfaction survey of randomly chosen users of Nursing Home Compare information said they were highly satisfied with the information--for example, that it was clearly displayed, easy to understand, and valuable.
It is not clear, however, that these responses were representative of all nursing home consumers accessing the Web site, as CMS implied. For example, CMS informed us that this survey was part of a larger survey of all Medicare Web site users, which had a low overall response rate of 29 percent. Moreover, of the 654 respondents to the Nursing Home Compare component of the survey, fewer than half (40 percent, or about 260 respondents) were identified as Medicare beneficiaries, family members, or friends. NQF feedback to CMS on its Web site presentation was consistent with our findings. In commenting on our draft report, NQF noted that it had offered informal guidance to CMS, such as using positive or neutral wording to describe indicators, exploring alternative ways of presenting information about differences among facilities, and ensuring that the presentation of the data reflects meaningful differences in topics important to consumers. While justifying its current presentation of quality indicator data, CMS commented that it is seriously considering not reporting individual nursing home scores but rather grouping homes into ranges, such as the bottom 10 percent, middle 70 percent, and top 20 percent of facilities in a state. Such a change, however, would not come before the national rollout. We agree with CMS that, when grouping homes into ranges, homes on the margin--close to the bottom 10 percent or top 20 percent--may not be significantly different from one another. However, the same is true of reporting individual facility scores. Moreover, reporting ranges more clearly identifies for consumers the homes that are outliers. Additional CMS Comments: CMS also commented on our characterization of the scope of the nursing home quality initiative. CMS stated that we had narrowly framed the initiative as one designed solely for consumers, ignoring the QIOs' quality improvement activities with individual nursing homes requesting assistance. Our report acknowledged and briefly outlined the quality improvement role of the QIOs. However, based on our requestors' concerns about the relatively short pilot time frame prior to national implementation of public reporting of quality indicators, we focused our work on that key aspect of the initiative. CMS cited its Interim Report on Evaluation Activities for the Nursing Home Quality Initiative to support its conclusion that the initiative was successful in promoting quality improvement activities among nursing homes. However, the improvements cited in the Interim Report were self-reported by facilities, and CMS offered no insights on the nature of the quality improvement changes. The Interim Report was not available when we sent our draft report to CMS for comment. CMS provided several technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at [hyperlink, http://www.gao.gov]. If you or your staff have any questions, please call me at (202) 512-7118 or Walter Ochinko at (202) 512-7157. GAO staff acknowledgments are listed in appendix IV. Signed by: Kathryn G.
Allen: Director: Health Care--Medicaid and Private Health Insurance Issues: [End of section] Appendix I: Comparison of Quality Indicators Proposed by NQF and CMS for National Rollout:

Indicator                                      Draft NQF indicators     CMS indicators for
                                               for national reporting   the national rollout

Chronic care (long-stay resident) quality indicators:
Decline in activities of daily living          [Check]                  [Check]
Pressure ulcers, high and low risk             [Check]                  [Empty]
Pressure ulcers                                [Empty]                  [Check]
Pressure ulcers[A]                             [Empty]                  [Check]
Inadequate pain management                     [Check]                  [Check]
Physical restraints used daily                 [Check]                  [Check]
Infections                                     [Empty]                  [Check]
Weight loss                                    [Check]                  [Empty]
Depression without therapy                     [Check]                  [Empty]
Incontinence                                   [Check]                  [Empty]
Catheterization                                [Check]                  [Empty]
Bedfast residents                              [Check]                  [Empty]

Postacute (short-stay resident) quality indicators:
Failure to improve and manage delirium         [Check]                  [Check]
Failure to improve and manage delirium[A]      [Empty]                  [Check]
Inadequate pain management                     [Check]                  [Check]
Improvement in walking                         [Check]                  [Check]
Rehospitalizations                             [Check]                  [Empty]
[A] This indicator is listed twice because CMS plans to report it with and without facility-level adjustment. Source: NQF and CMS. [End of table] Appendix II: Comments from the Centers for Medicare & Medicaid Services: Department of Health and Human Services: Centers for Medicare and Medicaid Services: Administrator: Washington, DC 20201: Date: October 22: To: Kathryn G. Allen: Director: Health Care Medicaid And Private Health Insurance Issues: From: Thomas A. Scully, Administrator: Subject: General Accounting Office (GAO) Draft Report, Nursing Homes: Public Reporting of Quality Indicators Has Merit, But National Implementation Is Premature (GAO-03-187): We appreciate the opportunity to review and offer comments on the above-referenced report. The Centers for Medicare & Medicaid Services (CMS) is the agency that purchases healthcare for 40 million Medicare beneficiaries, and partners with states to purchase healthcare for another 30 million people with Medicaid. As such, we take very seriously our charge to assure that we use every strategy available to us so that taxpayer dollars are used to finance continued improvements in the quality of care. We welcome GAO's report, because the issues raised make us more convinced than ever that moving forward to launch the Nursing Home Quality Initiative (NHQI) is the right thing to do. We believe the quality measurement information we are sharing is reliable, valid, accurate, and useful. Waiting for more reliability, more validity, more accuracy, and more usefulness will delay needed public accountability, and deprive consumers, clinicians, and providers of important information they can use now. Quality of care in nursing homes will improve more quickly through the positive stimulus of this initiative now. The CMS is committed to continually improving the quality measures and working to resolve the issues discussed in this report. For its role in elucidating further work that needs to be done, we appreciate this report from GAO. We will use this to help us improve the initiative over time. We intend to move forward with the national implementation as planned. The goal of the NHQI is exactly to help nursing homes to provide an even higher quality of care for all of their residents. We are using three strategies to achieve this goal. First, we are continuing to enforce the standards against which nursing homes are measured, assuring that certain foundational activities in nursing homes take place. Second, we are providing consumers with information to make choices and to engage in discussion about quality with their physicians and providers. And third, we are offering significant quality improvement technical assistance to nursing homes through our Medicare Quality Improvement Organizations (QIOs). The quality measures for this initiative have three intended uses. First, we aim to improve the opportunity for beneficiaries to exercise choice. In explaining the use of this information, we emphasize to beneficiaries that this is only one aspect of the many pieces of information that are available in making choices, and that our Web site contains more information and suggestions on how to use all of the information to choose a nursing home. Second, we want to improve communication, getting all involved parties to talk about the quality of care that is provided and received. Third, the measures are intended to be a stimulus and tool for the industry to help nursing facilities improve the quality of care provided to their residents.
I have personally spent hundreds of hours, as has Secretary Thompson, meeting with nursing homes, patient advocates, unions, academics, and every imaginable party over the past year to develop this program. We are convinced that we have conducted a full, thorough, and complete process. This process of developing health quality standards is twenty years overdue, and CMS and HHS strongly believe that these measures are timely, well developed, and essential to quality long term care. We have a number of specific comments regarding this report. GAO's Framework: We believe GAO has narrowly framed the initiative as one designed solely for consumers. This ignores the quality improvement activities conducted by the QIOs, which are a key component of this initiative. The QIOs in the six pilot states--and even in some non-pilot states--have had unprecedented numbers of nursing homes request technical assistance to improve quality. Half the nursing homes in the pilot states requested QIO assistance. This represents a substantial increase from the pre-pilot period. Furthermore, approximately three-quarters of the facilities in these states indicated that they have made, or plan to make, improvements in process. Three-quarters of these facilities said that the NHQI was somewhat, fairly, or very influential in their decision to do so. The focus on consumers also ignores the success of the quality initiative with other audiences. Additional audiences include the nursing homes themselves and referral sources, or information intermediaries. Nursing homes are an important audience because they are stimulated by this information to improve the quality of care they provide to residents. Information intermediaries or referral sources are important because they interact directly with the nursing homes and the consumers regarding nursing home placements. The GAO's framework also ignores the valuable role played by State Survey Agencies (SSAs). The SSAs continue their valuable regulatory role, but are also partnering in new ways with providers and QIOs to share best practices and provide tools for improvement. The QIOs, providers, information intermediaries, and SSAs are all playing important roles, and thus, this initiative extends well beyond a consumer focus. Lessons Learned From Pilot Initiative: We are committed to helping the public understand nursing home quality information. We have used the lessons learned from the pilot initiative to work aggressively with our QIOs to establish systems of support that will prepare them to respond to the needs of consumers. In addition, QIOs have, during the pilot, encouraged consumers to consider many sources of available information (including state-sponsored web sites with quality and other nursing home information, survey data, facility-specific characteristics, a visit to potential nursing homes, etc.) to help them select a nursing home that best fits their needs. The QIOs will continue to use this approach during the national implementation of the Initiative. We also wanted to give consumers one Web site and telephone point of reference to use to obtain initial information on the quality measures. We used [hyperlink, http://www.medicare.gov] and 1-800-Medicare for that purpose, with the intent of having callers referred to their state QIO when additional or more detailed information was required.
While the telephone technology at 1-800-Medicare does allow for a direct transfer of calls from 1-800-Medicare to certain other organizations, the 1-800-Medicare staff determined not to directly transfer callers to the QIOs, due to the limited or inconsistent telephone technology at the QIOs. Instead, when referrals to the QIOs are necessary, the 1-800-MEDICARE customer service representatives (CSRs) provide callers with the direct 1-800 phone line to the appropriate QIO. Based on the experience of the pilot, we are revising the scripts that the 1-800-Medicare CSRs will use to handle the incoming nursing home-related calls. In addition, we are refining the process through which each call will be handled, both by the 1-800-Medicare CSRs and the QIO staff. Additional training for the 1-800-Medicare CSRs, as well as QIO staff, will be provided prior to the national launch to ensure that all are able to handle the consumer inquiries. In addition, we will further educate the CSRs and the QIO staff on the roles of the various partners in the initiative, so that all understand what aspect of the project each is responsible for handling. All QIOs maintain consumer help lines to address a number of topics related to their Medicare contracts, one of which is the nursing home initiative. As such, the help line menu options and the call flow are not tailored specifically to the needs of callers looking for information about nursing homes. Using the training mentioned above, we plan to provide additional instruction to the QIOs as to how to appropriately answer calls related to the nursing home initiative. This training will include a revised set of questions and answers from which the QIO staff can obtain answers to the most frequently asked questions. Refresher training will be held on an ongoing basis for the CSRs and QIO staff so that they are kept current with the information they provide. The QIO level of understanding of the survey and certification process does vary. The QIOs are not expected to have a detailed understanding of the state survey process. However, it is expected that QIOs will be able to direct callers to other sources of information on those data, such as the state's own Web site. In addition, QIOs should not make recommendations or express a preference to consumers for using either nursing home surveys or quality indicators in making nursing home decisions. The responsibility of the QIO is to make consumers aware of the existence of the data, to explain their meaning, and to encourage consumers to use the data as a part of their decision-making process. Measure Selection: Regarding GAO's criticism of the measures selected by CMS, the agency has received input from a number of sources in selecting the measures for this initiative. In particular, we have appreciated input from the National Quality Forum (NQF) steering committee and membership. We look forward to their further deliberation and final consensus on a set of standard nursing home measures. We believe that this is the correct sequence of events. We will move forward to share our best current information, and other purchasers and the states will share other available information. Once there has been enough research and experience, a consensus-building body such as the NQF should set standards. But in the interim, we will not delay sharing our available information with the public.
Additionally, we would like to point out that on page 8, GAO has mischaracterized the Abt validation report as a "final draft." This report is final, and CMS has been making full use of all appendices in our deliberations and measures selection. Risk Adjustment: On the particular issue of risk adjustment, we are well aware of the debates about methodology, and that thoughtful academics simply disagree. We are hopeful that the NQF consensus process will help to settle some of those issues in the near future, and we will continue to fund improvements and refinements to the measures over time. The CMS fully supports the methodology and adjustment created by our contractors. The selection of terminology to describe this adjustment to the consumer has been a dynamic process. The CMS agrees that the documents that were available to the GAO analysts required edits, and feedback from the GAO analyst was helpful with regard to making distinctions among the exclusions. These will now be defined as data adjustments (admission or selection exclusions) and clinical exclusions. There will be three levels of detail regarding risk adjustment available on the Nursing Home Compare Web site, ranging from the consumer to researchers. The choice to report two of the measures with, and without, the Facility Admission Profile (FAP) adjustment is in response to concerns raised during the pilot from facilities that feel they specialize in certain areas, such as pressure ulcers. The pressure ulcer and delirium quality measures were both equally valid and reliable with, and without, the FAP. The CMS has chosen to report these measures both ways to evaluate their usefulness and to allow facilities and consumers the additional information. Resident Assessment Instrument (RAI) and Minimum Data Set (MDS) Accuracy: The MDS data used to calculate the quality measures were demonstrated to be reliable as part of the Abt contract validation study conducted in 2001. The GAO's skepticism appears to be based on earlier MDS accuracy studies. The OIG report referenced (OEI-02-99-00040) discusses a field test OIG conducted between June and August of 1999. Of note, CMS comments that were included in the OIG report at that time questioned the methodology and OIG's interpretation of CMS documents used to perform the study. It is important to recognize that the MDS was used in prior years only for resident assessments. Automation of the current instrument did not occur until June 1998, and it became the source document for the Prospective Payment System with the cost reporting cycle beginning July 1, 1998. This additional function of the MDS likely improved its accuracy compared to the early implementation period. Over the three years since the field tests for these studies were performed, CMS has consistently worked to maintain and improve the RAI/MDS and its coding accuracy, in order to support our payment, regulatory, and quality systems. Our current efforts are multi-faceted and include: * A Fall 2002 major revision to the Long Term Care Resident Assessment Instrument User's Manual, Version 2.0, including clarification of item-by-item instructions and submission requirements. * On-going train-the-trainer sessions with the State RAI coordinators. * Frequent interactions with the nursing home industry and other stakeholder organizations to provide updates and clarification regarding MDS coding. * A planned December 2002 satellite broadcast providing additional guidance on MDS coding specific to the quality measures.
The target audience is nursing home MDS coordinators and QIOs. * Monitoring of more global MDS coding trends by the Data Accuracy and Verification (DAVe) contract. * Current efforts to develop the MDS 3.0 incorporating input from the DAVe contract, users, and subject matter experts. Consumer Reporting on the Web: The GAO critiques CMS's reporting format, claiming the information was "neither consumer friendly nor presented in a format consistent with the data's limitations." Specifically, GAO concludes the following: a) Consumers will be confused because scores on quality measures can conflict with each other and with deficiency data. b) The public may confuse a high score as being a positive result when it is not (e.g., a high percentage of residents with pressure sores). Moreover, the GAO suggests that CMS abandon presenting exact scores for each nursing home and instead use a system that ranks nursing homes in ranges like that used by the State of Maryland. The GAO claims the current format "implies a precision in the data that are lacking." We have strong concerns over the methods GAO used to arrive at these conclusions, as they simply cite their "analysis" and offer no explanation of how their analysis was conducted. The CMS believes analysis of reporting templates should rely heavily on systematically conducted interviews with consumers and other stakeholders. The CMS uses these methods when developing reporting templates and language. This research and experience suggests the following in reply to GAO's conclusions. The CMS Web site offers consumers a great deal of information to use in making nursing home placements. Consumers are urged to choose the information that is important to them, both to deal with the volume of available information and to resolve apparently conflicting information. The CMS strongly urges consumers to visit nursing homes before making a final decision. To this end, CMS also provides a checklist consumers can use when visiting nursing homes. All of this adds up to CMS's main message, which is to use the quality measures as only one factor in the decision-making process. Consumers strongly agree with this proposition. In the Needs Assessment Report CMS shared with GAO, it is clearly stated in the Executive Summary that consumers are unlikely to use the quality measures in isolation. During interviews we used to develop the language describing the quality measures, consumers were shown examples of the graphs used on Nursing Home Compare. During this process, we did not observe consumers having trouble understanding that higher percentages of residents with pressure sores were a negative finding. Even so, CMS clearly states on each appropriate measure that lower scores are better. Only one quality measure, the percentage of residents who walk as well or better, reports higher measures as better. Overall, evidence suggests that consumers are not confused by the quality measures, and are very satisfied with the current display. In August 2002, CMS conducted an online satisfaction survey of randomly chosen users of Nursing Home Compare information from the six pilot states. The Nursing Home Compare and the quality measure displays scored extremely high on all satisfaction measures (see Table 1). On each measure, approximately 40% of respondents rated the Web site and display a 10, with as many as 45% scoring the clarity of the quality measure displays a 10.
Similarly, approximately 70% of respondents gave the Web site and quality display scores of 8 or higher on all satisfaction measures. The survey results not only demonstrated a high level of satisfaction with the quality measure display, but they also suggested a strong demand for the quality information among Nursing Home Compare users. Over two-thirds (67%) of users strongly agreed that the information was valuable. Similarly, 64% strongly agreed that it was the "information I was looking for." [Footnote 35] This represents a strong demand for this information, and CMS needs to continue building its information resources to meet that demand.

Table 1: Nursing Home Compare Quality Measures:

Survey question                   0[A]-4     5-7        8-9        10, Very much agree   Don't know   Total
Easy to search for information    10% (66)   20% (130)  27% (176)  41% (266)             2% (16)      654
Information I was looking for     12% (80)   21% (139)  28% (181)  37% (241)             2% (13)      654
Information displayed clearly     8% (51)    17% (110)  30% (193)  45% (292)             1% (8)       654
Information easy to understand    9% (59)    19% (122)  28% (186)  43% (278)             1% (9)       654
Information was valuable          12% (75)   19% (123)  27% (175)  41% (266)             2% (15)      654

[A] Respondents were asked to rate the information from 0, "very much disagree," to 10, "very much agree." For analysis purposes, ratings of 8-10 were considered "strongly agree."

The CMS is seriously considering a categorization scheme like that used by the State of Maryland. The problem with such a scheme is that two nursing homes with scores that are not significantly different can be placed in two separate categories. One can receive a "high" score and the other an "average" score when they are not actually different from each other. In focus groups on other quality measures, consumers have told CMS that they interpret differences of a few percentage points as not being different at all. From a statistical point of view, the consumers are correct. It is possible that placing nursing homes into categories, as GAO suggests, will more strongly reinforce an unwarranted precision by suggesting differences between nursing homes that are not real. Urging consumers and other audiences to use quality information when making nursing home placements is essentially a health promotion activity. Successful health promotion activities evolve over time using local and national experience. To this end, CMS will continue to develop and improve tools for consumers and other audiences to use when making nursing home decisions. Program Description: On the second page of the GAO letter, there are misstatements in the description of the NHQI program. First, GAO states that QIOs would be managing the media campaign intended to increase public awareness. The CMS central office manages the media campaign, while QIOs engage in local promotional activities. Also, it is a mischaracterization to call the QIOs' promotional activities simply a "media campaign." The QIOs are also engaged in partnership building and grass-roots promotional activities designed to get quality information into the hands of consumers when they need it most.
Second, and more importantly, the QIOs are working with more than 5 to 11 nursing homes in each pilot state. The QIOs are working intensively with 6 to 11 nursing homes in each pilot state and promoting quality improvement activities for hundreds of other nursing homes through workshops and other activities. Evaluation: The CMS has completed a number of evaluation activities. These activities are summarized in the attached Interim Report. These findings were not available to GAO when they wrote the current draft of their report. While final reports are not yet available for some of these studies, CMS uses preliminary findings to steer the national implementation. In GAO's summation of CMS's evaluation design, the fourth research question has been misstated. The real question is whether the NHQI activities influenced nursing home quality improvement activities. The program monitoring and nursing home survey evaluation activities both suggest that the answer to this question is yes. The CMS's systematic program monitoring recorded that over half of the nursing homes in the pilot states requested quality improvement assistance from the QIOs. The survey of nursing homes found that 88% had heard of the initiative and 77% said that the NHQI was somewhat, fairly, or very influential in the quality improvement changes they have made in the past six months. The GAO recommends that the national reporting of quality indicators be delayed until a more thorough evaluation of the pilot is completed. This recommendation represents a lack of GAO understanding of the pilot's purpose and the workings of health promotion activities. The CMS strongly agrees with GAO's statement that the public reporting of quality measures is a good idea. The CMS also understands that improving the related promotional activities will greatly benefit from regional and national experiences over time. All national health promotion activities (e.g., immunization, mammography, diabetes management, HIV prevention, etc.) continue to learn and improve from models tested in multiple settings. Therefore, a national implementation should not be delayed. The CMS does have evidence that the promotional activities are starting to demonstrate success, and plans to replicate the successful activities in the national launch. Successful health promotion activities necessarily start with getting the attention of important audiences. As stated above, our experiences in the pilot suggest that we have gotten the attention of a very important audience: the nursing homes. Furthermore, this experience suggests the nursing homes are starting to act on this campaign. A program as complex as the NHQI has many implementation issues to consider. These issues include preparing multiple data sets for reporting; setting up mechanisms to allow thousands of nursing homes to preview their own data; and gaining input from stakeholders, such as industry and consumer organizations. These are not issues that necessitate a formal evaluation plan; instead, they lead to experiential lessons learned. The pilot allowed CMS the experience to work through these issues before a national launch. The CMS is dedicated, however, to ongoing research to assess a variety of models that will help improve our promotion of nursing home quality information over time. We have attached an Executive Summary of our Interim Report on Evaluation Activities for the Nursing Home Quality Initiative that describes an evaluation plan for the Nursing Home Quality Initiative.
Attachment: Report on Evaluation Activities for the Nursing Home Quality Initiative Pilot, October 17, 2002: Executive Summary: Four different studies and monitoring activities were used to evaluate the Centers for Medicare & Medicaid Services' Nursing Home Quality Initiative (NHQI) Pilot. The major findings from this evaluation are listed below: The NHQI was successful in promoting quality improvement activities among nursing homes. * As a result of the NHQI, over half of the nursing homes (52%) in the six pilot states requested quality improvement technical assistance from the Quality Improvement Organizations (QIOs). * The vast majority of nursing homes (88%) had heard of the NHQI. * Over three-quarters of nursing homes (78%) reported making quality improvement changes during the NHQI pilot, and 77% indicated that the NHQI was, in part, responsible for their decision to undertake these activities. The NHQI increased information seeking about nursing homes. * Phone calls to 1-800-Medicare concerning nursing home information more than doubled during the pilot rollout, and visits to Medicare.gov's nursing home information for the six pilot states increased tenfold. Users of the quality information online were highly satisfied. * Web users said the information was clear, easy to understand, easy to search, and valuable. On a scale of 0 to 10, over 40% of Web users scored the information a 10 on these dimensions, and approximately 70% gave the information an 8 or higher. [End of section] Appendix III: Comments from the National Quality Forum: The National Quality Forum: 601 Thirteenth Street, NW: Suite 500 North: Washington, DC 20005: Phone 202-783-1300: Fax 202-783-3434: [hyperlink, http://www.qualityforum.org]: October 18, 2002: Kathryn G. Allen: Director: Health Care--Medicaid and Private Health Insurance Issues: United States General Accounting Office: Washington, DC 20548: Dear Ms. Allen: Thank you for the opportunity to comment on your proposed report entitled Nursing Homes: Public Reporting of Quality Indicators Has Merit, But National Implementation Is Premature (GAO-03-187). The National Quality Forum (NQF) unequivocally supports the Centers for Medicare and Medicaid Services' plans to publicly report quality of care nursing home data, and we would not support any undue delay in implementing their national reporting efforts. However, for the reasons noted below, we can see where there might be some value in a short-term postponement (e.g., three or four months) that would allow finalization of a consensus set of measures and more time for efforts to prepare the public on how to use and interpret the data. The NQF concurs that the advantages to delaying national reporting of nursing home quality indicators, until NQF finishes its work to achieve consensus on a set of measures for public reporting of nursing home quality, include the following: * Current research pertaining to validation of quality indicators performed by Abt Associates and the California Healthcare Foundation's California Consumer Information System project will inform the NQF's Consensus Development Process. Many NQF members commented that incorporating this additional information was necessary for them to consider the proposed measures. * Information from the experience with the pilot from CMS, Quality Improvement Organizations, consumer groups, and others may provide additional important information to the NQF consensus process.
* The NQF believes that ultimately achieving consensus on public reporting of nursing home measures will aid acceptance and use of the quality information and provide a roadmap for improvement in the future. The NQF Board of Directors and the "Nursing Homes" Steering Committee have identified presentation and public reporting format to be a critically important issue. The Steering Committee offered general, informal guidance to CMS during its meetings, including using positive or neutral wording of measures; exploring alternative ways of presenting information about differences among facilities; and ensuring that the presentation of data reflects meaningful differences on topics important to consumers. Beyond the informal guidance, the NQF is not currently engaged in assessing report format; it is beyond the scope of the requested work. If you have any questions, please contact me at 202-783-1300. Sincerely yours, Signed by: Reva Winkler, MD, MPH: Clinical Consultant: [End of section] Appendix IV: GAO Contact and Staff Acknowledgments: GAO Contact: Walter Ochinko, (202) 512-7157: Acknowledgments: The following staff made important contributions to this report: Laura Sutton Elsberg, Patricia A. Jones, Dean Mohs, Dae Park, Jonathan Ratner, Peter Schmidt, Paul M. Thomas, and Phyllis Thorburn. [End of section] Footnotes: [1] In June 2001, the agency's name was changed from the Health Care Financing Administration (HCFA) to CMS. In this report, we continue to refer to HCFA where our findings apply to the organizational structure and operations associated with that name. [2] [hyperlink, http://www.Medicare.gov/NHCompare/home.asp]. [3] Under contract with CMS, 37 QIOs (formerly known as Peer Review Organizations) are responsible for determining the quality, effectiveness, and efficiency of health care services provided to Medicare beneficiaries in all 50 states and the District of Columbia. [4] NQF is a nonprofit organization created to develop and implement a national strategy for health care quality measurement and reporting. NQF has broad participation from government and private entities as well as all sectors of the healthcare industry. [5] Surveys must be conducted at each home on average every 12 months and no less than once every 15 months. [6] The quality indicators used in nursing home surveys were developed by the University of Wisconsin under a HCFA-funded contract. See Center for Health Systems Research and Analysis, Facility Guide for the Nursing Home Quality Indicators (Madison, Wis.: University of Wisconsin-Madison, September 1999). Surveyors use the indicators to help select a preliminary sample of residents and preview information on the care provided to a home's residents prior to the on-site inspection. Prior to their introduction in 1999, selection of the sample relied on a listing of residents and their conditions maintained at the nursing home and on observation of residents made during a walk-through of the facility. As a result of the quality indicators, the sample selection is more systematic and surveyors are better prepared to identify potential care problems. However, the quality indicators used during surveys were not developed for public reporting because they were viewed as providing an indication of a potential quality problem that required validation through an on-site survey. [7] MDS assessments are conducted for all nursing home residents within 14 days of admission and at quarterly and yearly intervals unless there is a significant change in condition.
Medicare beneficiaries in a Medicare-covered stay are assessed on or before the 5th, 14th, 30th, 60th, and 90th day of their stays to determine if their Medicare coverage should continue. [8] The steering committee consists of 12 stakeholders representing health researchers, geriatricians, state survey agencies, state Medicaid directors, health systems, and others. [9] The NQF relies on a consensus process led by a steering committee that initially conducts an overall assessment in a particular area and gathers input from NQF members, nonmembers, and expert advisory panels. The steering committee then recommends a set of draft measures, indicators, or practices for review. Next, the draft recommendations are distributed for review and comment--first to NQF members and then to the general public. Following this open review period, a revised product is distributed to NQF members for a vote. The NQF Board of Directors must ultimately approve matters under consideration before the consensus process is complete. [10] Abt Associates, Inc., HRCA Research and Training Institute, and Brown University, Validation of Long-Term and Post-Acute Care Quality Indicators, final draft report prepared for CMS, Office of Clinical Standards and Quality, Aug. 2, 2002. [11] NQF indicated that it viewed its list as a starting point for a stronger, more robust set of future indicators. Because nursing homes include both medical care and social services, NQF believes that a core set of indicators should cover several other highly important areas in addition to clinical quality of care, including resident quality of life; measures of resident and family satisfaction; and the nursing home environment, such as food quality and number of residents per room. A March 2002 report prepared for CMS acknowledged that clinical indicators are less important to the public than issues such as facility cleanliness and a caring staff. See Barents Group of KPMG Consulting, Inc., Nursing Home Consumer Choice Campaign Needs Assessment Report (McLean, Va.: Mar. 14, 2002). [12] CMS explained that its decision to use facility-level adjustments was influenced by "great stakeholder interest" in how this new risk-adjustment methodology affected them. [13] NQF based its recommendation on the work of a Special Advisory Panel of three independent consultants who were asked to assist in resolving concerns about the technical complexity of Abt's risk adjustment approaches, particularly its proposed facility-level adjustments. Specifically, the April 2002 NQF consensus draft recommended priority funding for (1) research regarding the selection of appropriate risk factors; (2) comparisons of the different risk adjustment methodologies for nursing home performance data, as applied to each quality indicator; and (3) validation of different risk-adjustment methods. [14] See U.S. General Accounting Office, Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities, GAO-02-279 (Washington, D.C.: Feb. 15, 2002). [15] Validation analysis was incomplete for two additional indicators. [16] As noted earlier, Abt's sample may not be representative because only 50 percent of homes agreed to participate. [17] Abt Associates, Inc., Development and Testing of a Minimum Data Set Accuracy Verification Protocol, final report prepared for HCFA, Feb. 27, 2001.
The authors of this study computed the combined error rate for individual facilities by weighting the error rates for Medicare and non-Medicare assessments using the proportion of Medicare (.32) to non-Medicare (.68) assessment items for the entire sample of 30 facilities. However, the proportion of Medicare to non-Medicare assessments varied across facilities. For example, in one facility there were more Medicare than non-Medicare assessments. We therefore recomputed facility error rates using the proportion of Medicare to non-Medicare MDS assessment items for each facility. [18] Abt did not report error rates for individual items at the facility level. [19] More recently, state survey agency officials in three pilot states told us that they are concerned that the public reporting of quality indicators may lead to underreporting of certain problem areas, such as pain management. [20] Abt did not provide error rates for individual items that are adjusted to reflect the extent of the differences in assessments conducted by Abt and the facility nurses. [21] GAO-02-279, pp. 16-18. [22] OIG used the term "differences" rather than errors because its methodology did not permit a specific determination as to why the differences occurred. See HHS Office of Inspector General, Nursing Home Resident Assessment: Quality of Care, OEI-02-99-00040 (Washington, D.C.: December 2000). In commenting on this report, CMS expressed reservations about the OIG's methodology and interpretation of CMS documents used to perform the study. The OIG had recommended that nursing homes be required to establish an "audit trail" to document support for certain MDS elements. CMS disagreed, noting that it did not expect all information in the MDS to be duplicated elsewhere in the medical record. We concur with the OIG's position that, given the use of MDS data in adjusting nursing home payments and producing quality indicators, documenting the basis for the MDS assessments in the medical record is critical to assessing their accuracy. See GAO-02-279. [23] On-site reviews focus on determining whether a resident's medical record supports the MDS assessment completed by the facility. If the MDS assessment is recent, the review may also include direct observation of the resident and interviews with nursing home staff who have recently evaluated or treated the resident. Off-site reviews of MDS data include examining trends in assessments across facilities to identify aberrant or inconsistent patterns. [24] Stakeholders that commented on NQF's April 2002 draft set of indicators suggested that the quality indicator scores be reported as the percentage of residents not having the particular characteristic measured by each indicator--e.g., reporting that 80 percent of residents were not restrained rather than reporting that 20 percent of residents were restrained. If quality indicators were reported this way, having a high score would be better than having a low score. CMS indicated that it had received similar comments but will not make any changes prior to the national rollout in November 2002. [25] Chronic-care quality indicator scores were not reported for nursing homes with fewer than 30 residents after excluding some residents, e.g., those with certain clinical characteristics or those with missing data necessary to calculate a score. Short-stay quality indicator scores were not reported for nursing homes with fewer than 20 residents after excluding some residents.
[26] The largest number of highly positive or highly negative scores that any nursing home in the pilot states had was seven. [27] See U.S. General Accounting Office, Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit, GAO/HEHS-99-157 (Washington, D.C.: June 30, 1999). [28] See U.S. General Accounting Office, California Nursing Homes: Care Problems Persist Despite Federal and State Oversight, GAO/HEHS-98-202 (Washington, D.C.: July 27, 1998). [29] The hotline identified each of the six QIOs we called by its proprietary name--not by the term QIO or Quality Improvement Organization. For example, the QIO for Ohio is known as KePRO, while the QIO for Alaska, Idaho, and Washington is called Qualis Health. [30] A few QIOs did have a menu option for calls about the "nursing home project." [31] CMS identifies nursing home deficiencies on its Nursing Home Compare Web site using numbers, with 2 equivalent to potential for more than minimal harm and 3 equivalent to actual harm. [32] U.S. General Accounting Office, Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit, GAO/HEHS-99-157 (Washington, D.C.: June 30, 1999). [33] CMS also plans to incorporate information from a contractor's study completed prior to the pilot to determine how it could better motivate consumers to use nursing home quality information to make better informed decisions. See Barents Group, Nursing Home Consumer Choice Campaign Needs Assessment Report. [34] In April and May 2002, the number of Web site "hits" for states in the pilot increased substantially during the week the pilot was announced and subsequently decreased, but they remained higher than before the launch of the pilot. [35] The percentages in this paragraph represent those that rated the items 8 or higher on a scale of 0 to 10. [End of section] GAO's Mission: The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO's Web site [hyperlink, http://www.gao.gov] contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as "Today's Reports," on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to [hyperlink, http://www.gao.gov] and select "Subscribe to daily E-mail alert for newly released products" under the GAO Reports heading. Order by Mail or Phone: The first copy of each printed report is free. Additional copies are $2 each. A check or money order should be made out to the Superintendent of Documents.
GAO also accepts VISA and Mastercard. Orders for 100 or more copies mailed to a single address are discounted 25 percent. Orders should be sent to: U.S. General Accounting Office: 441 G Street NW, Room LM: Washington, D.C. 20548: To order by Phone: Voice: (202) 512-6000: TDD: (202) 512-2537 Fax: (202) 512-6061 To Report Fraud, Waste, and Abuse in Federal Programs Contact: Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]: E-mail: fraudnet@gao.gov: Automated answering system: (800) 424-5454 or (202) 512-7470: Public Affairs: Jeff Nelligan, managing director, NelliganJ@gao.gov: (202) 512-4800: U.S. General Accounting Office: 441 G Street NW, Room 7149: Washington, D.C. 20548:
