
CMS Hospital Quality Star Rating Program: Analysis and Commentary

By Brock Slabach posted 08-03-2016 04:21 PM

  

On July 27, 2016, CMS released Overall Hospital Quality Star Ratings on the Hospital Compare website for short-stay acute care hospitals in the United States. This follows the trend of quality data transparency at CMS as part of its Value-Based Purchasing (VBP) effort; Star Ratings already exist for home health, nursing homes and dialysis facilities. Of course, data transparency isn’t new: CMS has been posting quality data for individual hospitals on Hospital Compare for years. What is very different now is the reduction of 64 measures into a single score of between 1 and 5 stars. The analogy here is TripAdvisor.com. Reducing the complexity of hospital quality to a hotel-style star rating isn’t good for the public, nor is it helpful to hospitals, especially those in rural areas.

First, a little bit about the rating system itself. There are a total of 64 measures across the categories listed below, and each category is weighted in terms of its effect on the overall rating. For example, the mortality category contains seven measures and is weighted at 22% of the overall score; by contrast, timeliness of care also has seven measures but is weighted at only 4%. The following table illustrates the full star rating measurement system:

Outcome Measures

  • Mortality (N=7, 22% weight)
  • Safety of Care (N=8, 22% weight)
  • Readmissions (N=8, 22% weight)
  • Patient Experience (N=11, 22% weight)

Process of Care Measures

  • Effectiveness of Care (N=18, 4% weight)
  • Timeliness of Care (N=7, 4% weight)
  • Efficient Use of Medical Imaging (N=5, 4% weight)
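To make the weighting concrete, here is a minimal sketch of how the published category weights combine per-category scores into one summary number. This is not CMS's actual statistical model (CMS fits a latent-variable model to the underlying measures and then clusters hospitals into star levels); the hospital scores below are hypothetical, standardized category scores invented for illustration.

```python
# Published category weights from the July 2016 star rating methodology.
CATEGORY_WEIGHTS = {
    "Mortality": 0.22,
    "Safety of Care": 0.22,
    "Readmissions": 0.22,
    "Patient Experience": 0.22,
    "Effectiveness of Care": 0.04,
    "Timeliness of Care": 0.04,
    "Efficient Use of Medical Imaging": 0.04,
}

def weighted_summary(category_scores):
    """Weighted average of the category scores a hospital actually has.

    The denominator renormalizes the weights so that hospitals missing
    whole categories (e.g. low-volume rural hospitals) are still scored
    on the categories they do report.
    """
    total = sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())
    weight = sum(CATEGORY_WEIGHTS[c] for c in category_scores)
    return total / weight

# Hypothetical hospital: above average on outcomes, below on process.
example = {
    "Mortality": 0.5, "Safety of Care": 0.3, "Readmissions": 0.4,
    "Patient Experience": 0.2, "Effectiveness of Care": -0.1,
    "Timeliness of Care": -0.3, "Efficient Use of Medical Imaging": 0.0,
}
print(round(weighted_summary(example), 3))  # 0.292
```

Note how the four 22% categories dominate: the three 4% process categories together carry barely more weight than half of one outcome category.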

Based on this system of measurement, the star ratings were distributed according to the following table:

Star Rating     Number of hospitals (percent of hospitals rated)
One Star        133 (4%)
Two Star        723 (20%)
Three Star      1,771 (48%)
Four Star       934 (25%)
Five Star       102 (3%)

The total number of hospitals in the Hospital Compare data set was 4,599. Of these, 941 hospitals (20%) didn’t receive a quality score because they didn’t have enough data to meet reporting thresholds. The majority of the hospitals that didn’t meet the reporting threshold were CAHs (671). Some good news: the 541 CAHs that did have enough data to be rated averaged 3.31 stars as a group, which compares favorably to the overall average of 2.99 for all hospitals.
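The percentages in the table above can be reproduced directly from the counts; a quick sketch, with shares computed over the hospitals that actually received a rating:

```python
# Counts per star level from the July 2016 release, as quoted above.
counts = {1: 133, 2: 723, 3: 1771, 4: 934, 5: 102}

rated = sum(counts.values())  # hospitals that received a rating
shares = {star: round(100 * n / rated) for star, n in counts.items()}

print(rated)   # 3663
print(shares)  # {1: 4, 2: 20, 3: 48, 4: 25, 5: 3}
```

The rounded shares match the table, and nearly half of all rated hospitals land on three stars, the middle of the distribution.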

Here are some of my thoughts in reaction to this program. If you have ideas I didn’t mention, please post a response:

  1. Mortality data in this system is risk-adjusted using the coding provided when claims were filed. The Case Mix Index (CMI), though not directly used, is a proxy for how well (or poorly) claims coding is being done. Since CAHs aren’t paid based on the coding of claims (they’re paid on cost), it’s possible that CMI in CAHs is artificially lower than it should be. This would widen the gap between expected mortality and actual deaths. The correction, for both PPS hospitals and CAHs, is to review your coding capacity to ensure it is as accurate as possible.
  2. CAHs don’t participate in Medicare’s Hospital Readmissions Reduction Program, so readmissions have historically not been a focus of CAH operations in terms of incentive. Yet for this group of hospitals, readmissions are one of the metrics on which they are being judged in the Star Rating system. CAHs should understand their readmission rates and work to reduce 30-day readmissions to hospital.
  3. Will CMS continue to distribute ratings in a bell-shaped curve like the one the data above represents? If so, a hospital that makes significant progress in improving its performance under this program could stay exactly where it is if all other hospitals improve at the same pace. At least on TripAdvisor, a hotel’s rating depends only on the scores its own customers give it.
  4. We are disappointed that CMS rolled out a program in which 20% of hospitals didn’t have enough data to meaningfully participate. Small rural providers should have a measurement system built around their organizations and the services they actually provide. The National Quality Forum (NQF) produced a Rural Report in September 2015 that outlines a path forward for CMS to ensure all hospitals can participate meaningfully regardless of location or size.
  5. We are concerned that safety-net hospitals averaged 2.9 stars while non-safety-net hospitals averaged 3.09. Many rural hospitals are safety-net facilities serving underserved, vulnerable populations. The gap is even more glaring between Disproportionate Share Hospitals (DSH) and non-DSH hospitals: 2.92 stars for DSH versus 3.47 for non-DSH. NRHA supports adjusting the data in the Star Rating program for socio-demographic and economic status (SES).
  6. From our friends at Stratis Health and the Rural Quality Improvement Technical Assistance program, in their report on this topic: “Many of the measures included in the ranking methodology are specific to a particular diagnosis or procedure. Small and rural hospitals often don’t have enough volume of any specific diagnosis to have measures calculated, or the procedures measured are not part of the services they provide. Low-volume is not a statement about the quality of care.”
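The low-volume point maps directly onto the program's reporting threshold. As a rough sketch of that logic: the specific cutoffs used here (at least three reportable measures in each of at least three groups, one of them an outcome group) are my reading of CMS's published 2016 methodology and should be verified against the current specification.

```python
# Sketch of the reporting-threshold logic, not CMS's exact rule.
# Threshold values are assumptions taken from the 2016 methodology
# description: >= 3 measures in >= 3 groups, one an outcome group.

OUTCOME_GROUPS = {"Mortality", "Safety of Care", "Readmissions"}

def is_rateable(measure_counts, min_measures=3, min_groups=3):
    """measure_counts: group name -> number of reportable measures."""
    qualifying = {g for g, n in measure_counts.items() if n >= min_measures}
    return len(qualifying) >= min_groups and bool(qualifying & OUTCOME_GROUPS)

# A hypothetical small CAH with too little volume in the clinical groups:
small_cah = {"Patient Experience": 9, "Timeliness of Care": 4, "Mortality": 1}
print(is_rateable(small_cah))  # False -- excluded, regardless of quality
```

Under any rule of this shape, a hospital can be excluded purely for low volume, which is exactly Stratis Health's point: being unrated says nothing about the quality of care delivered.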

Finally, NRHA supports HR 5972, sponsored by Reps. Jim Renacci (R-OH) and Kathleen Rice (D-NY), which would delay the continuation of the hospital star rating program for at least one year. Furthermore, in April 2016, a bipartisan majority of both houses of Congress urged CMS to delay the release of the star ratings to allow time to work with the hospital field to ensure the ratings are fair and reliable.

Let's work together to devise a system that gives the public meaningful information for purchasing decisions and is fair to rural hospitals in their efforts to provide high-quality, low-cost care for their communities.
