The end-stage renal disease (ESRD) program has often been the vanguard of proposed changes in Medicare payment systems. Most visible among these in recent years is the creation of an ESRD value-based purchasing program, the Quality Incentive Program (QIP), which for the first time tied payment to performance. Under this program, the Centers for Medicare & Medicaid Services (CMS), which administers the QIP, withholds 2% of each dialysis session’s payment at the facility level and pays back up to this 2% if certain quality measures are met. These measures are usually created by CMS through a technical expert panel, endorsed by the National Quality Forum, subjected to public comment, and finally administered via existing CMS information systems.
Measures may be outcome measures (such as the yearly average percentage of patients at a given dialysis facility with hemoglobin over 12 g/dl) or process (reporting) measures. Some measures, such as those involving anemia, take the form of claims-based measures, because hemoglobin is reported on the claims form. In recent years, additional reporting measures have been added for which claims-based data are not available.
In calendar year 2012, there were three such measures: the administration of the in-center hemodialysis Consumer Assessment of Healthcare Providers and Systems survey (ICH CAHPS), data submission to the CDC’s National Healthcare Safety Network (NHSN), and meeting a defined minimum standard for mineral and bone disorder (MBD) testing. Given these data-systems limitations, CMS elected to use an annual facility attestation process through its CROWNWeb system for the reporting measures.
In contrast to claims-based measures, this attestation produced a unique circumstance in which individual dialysis providers performed in-house calculations, the results of which determined whether they could attest positively or negatively for each of these measures. This nuance, inherent in the design of these particular QIP measures, created a significant opportunity for inconsistency. Claims-based measures use strictly defined business rules and published mathematical equations, so CMS can centrally and consistently convert claims data to outcome measures for use in the QIP.
Unfortunately, this mathematical clarity was not extended to the reporting measures. For the first two reporting measures, namely the annual administration of the ICH CAHPS and the submission of data to NHSN, this was not an issue: given their binary nature, few if any calculations were required to attest to having met the requirement.
The third reporting measure, mineral and bone disorder testing, did require calculations for attestation. It is the variable way this narrative text, lacking defined measure specifications in the final rule, was interpreted by the dialysis provider community that we write about today. This is a cautionary tale that illustrates the need to be as explicit as possible when converting descriptive text into the mathematical equations that drive business rules and computer systems, and that ultimately affect public perceptions of quality.
The mineral and bone disorder measure
For this particular measure, a narrative definition was provided in the Federal Register for payment year 2014 ESRD-QIP Final Rule (77 FR 67475). There the text states: “a facility treating at least 11 Medicare patients during the performance period can attest to meeting the requirements of the PY 2014 Mineral Metabolism reporting measure if it monitors on a monthly basis the serum calcium and serum phosphorus for at least 96 percent in total of all (i) in-center Medicare patients who have been treated at least seven times by the facility; and (ii) home hemodialysis Medicare patients for whom the facility submits a claim.”
The key phrase that could have benefited from a mathematical clarification is the following: “monitors on a monthly basis the serum calcium and serum phosphorus for at least 96 percent in total of all (i) in-center Medicare patients who have been treated at least seven times by the facility.”
The final rule did not contain an equation converting this text to math. As a result, there were two possible and very different interpretations of the phrase, each of which led to a different conclusion about a facility’s ability to attest to this QIP measure.
The first such interpretation (Method 1) can be characterized as the “percent of complete months” method. The key assumption is that every month must individually meet the 96% testing threshold for a facility’s eligible patients. In essence, this interpretation assumes a facility-level metric is being described, consistent with other facility-level quality metrics in ESRD.
The second interpretation (Method 2) can be described as the “percent of patient-months meeting the definition” method. Here the denominator is assumed to be patient-months of exposure across the entire year, which makes this a patient-level metric.
The math that illustrates each of these differences is displayed below:
Method 1 (% of complete months method)
Standard: 96% rate for all months
For example, suppose a given dialysis facility fell below a 96% testing rate in September but exceeded 96% in every other month of the year.
Only 11 of 12 months met the 96% rate, so the facility cannot attest to meeting the requirement.
The facility’s QIP attestation in CROWNWeb would then be that it did not meet the requirements.
Method 2 (% of pt months method)
Standard: 96% average over entire year
For example, the same facility described in the method 1 example tested 406 of its 412 eligible patient-months, giving it a 99% testing rate for the year.
Under method 2, the facility that could not successfully attest under method 1 can now successfully attest to meeting the requirement.
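The difference between the two interpretations can be sketched in code. The monthly counts below are hypothetical, chosen only so that the annual totals match the 406-of-412 example above; the two functions express the two readings of the same narrative text:

```python
# Hypothetical monthly counts for one facility: each entry is
# (patients tested for serum calcium and phosphorus, eligible patients).
# Illustrative only; totals are constructed to sum to 406 of 412.
monthly = [
    (34, 34), (34, 34), (35, 35), (34, 34), (35, 35), (34, 34),
    (35, 35), (34, 34), (30, 35),  # September falls below 96%
    (34, 34), (34, 34), (33, 34),
]

THRESHOLD = 0.96

def method1_attests(months):
    """'Percent of complete months': every month must individually
    reach the 96% testing rate for the facility to attest."""
    return all(tested / eligible >= THRESHOLD for tested, eligible in months)

def method2_attests(months):
    """'Percent of patient-months': pool tested and eligible
    patient-months across the year, then apply the 96% threshold."""
    tested_total = sum(t for t, _ in months)
    eligible_total = sum(e for _, e in months)
    return tested_total / eligible_total >= THRESHOLD

print(method1_attests(monthly))  # False: the single September shortfall fails it
print(method2_attests(monthly))  # True: 406/412 ≈ 99% for the year
```

One month of missed testing is decisive under method 1, while method 2 absorbs it into the annual pooled rate.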
Shortly following the final submission of QIP attestation into CROWNWeb, an informal survey of dialysis providers did indeed reveal that in the absence of specific mathematical guidance, some providers calculated compliance with this metric using method 1, while others used method 2. As illustrated by the example, the use of method 1 rather than method 2 can result in dramatic differences in the perception of quality for the same dialysis facility, or for providers with multiple dialysis centers.
To understand how much the choice between method 1 and method 2 may affect a provider, we used our internal databases to test the sensitivity of method selection. We took a random sample of 1,801 dialysis facilities and calculated their QIP scores for a given year using both methods.
- Using method 1, 1432/1801 (80%) facilities could successfully attest to having met the measure.
- Using method 2, 1698/1801 (94%) facilities could successfully attest to having met the measure.
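A sensitivity check of this shape can be reproduced in outline. The facility data generated below are random and purely illustrative, not the claims data behind the figures above; the point is the structural invariant that method 2 can only attest at least as many facilities as method 1:

```python
import random

random.seed(0)
THRESHOLD = 0.96

def simulate_facility():
    """Generate synthetic monthly (tested, eligible) counts for one
    facility. Hypothetical data for illustration only."""
    months = []
    for _ in range(12):
        eligible = random.randint(20, 60)
        # ~30% of months have a small testing shortfall
        shortfall = random.randint(1, 3) if random.random() < 0.3 else 0
        months.append((eligible - shortfall, eligible))
    return months

def attests_method1(months):
    # Every month must individually meet the threshold.
    return all(t / e >= THRESHOLD for t, e in months)

def attests_method2(months):
    # Pooled patient-months for the year must meet the threshold.
    return sum(t for t, _ in months) / sum(e for _, e in months) >= THRESHOLD

facilities = [simulate_facility() for _ in range(1801)]
m1 = sum(attests_method1(f) for f in facilities)
m2 = sum(attests_method2(f) for f in facilities)
print(f"Method 1: {m1}/1801 attest; Method 2: {m2}/1801 attest")
```

Because the pooled annual rate is a weighted average of the monthly rates, any facility that passes method 1 necessarily passes method 2, so m1 ≤ m2 always, consistent with the 80% versus 94% result above.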
Clearly, both the perception of quality and the payment under the QIP would differ substantially, based not on the actual quality delivered but on the choice of calculation method. In addition, comparisons of quality among providers would not be true apples-to-apples comparisons if some use method 1 and others method 2.
This example reinforces the need for CMS to be explicit and to provide, for self-reported attestation measures, business rules and mathematical equations on par with the clarity provided for measures that CMS calculates centrally itself. Failure to do so results in an apples-to-oranges comparison of results, depending on which method is used. Many of the comment letters submitted on the QIP sections of the proposed ESRD rule in the fall of 2013 advocated for such clarity.
Failure to correct such ambiguity creates an administrative issue that undermines the goals of the QIP. Such data, influenced by inconsistent application of business rules, may mislead patients and payors about the true quality of care provided. Further, in a pay-for-performance (P4P) program, this variability of interpretation may also cost facilities money. The solution to this issue is simple: providing the mathematical equation for this calculation should be straightforward and within CMS’s regulatory authority to implement. By doing so, CMS can enable the shared goal of using comparable data to drive improved outcomes and benefit patients.