
Comparing the Medicaid Prospective Drug Utilization Review Program Cost-Savings Methods Used by State Agencies in 2015 and 2016

February 2019 Vol 12, No 1 - Regulatory
Sergio I. Prada, MPA, PhD; Johan S. Loaiza, BS
Dr Prada is Professor of Economics and Senior Research Associate, Universidad Icesi, Centro PROESA, Cali, Colombia; Mr Loaiza is Research Assistant, Universidad Icesi, Centro PROESA.
Abstract

BACKGROUND: The Medicaid Drug Utilization Review (DUR) program is a 2-phase process conducted by Medicaid state agencies. The first phase is a prospective DUR process and involves electronically monitoring prescription drug claims to identify prescription-related problems, such as therapeutic duplication, contraindications, incorrect dosage, or duration of treatment. The second phase is a retrospective DUR involving ongoing, periodic examinations of claims data to identify patterns of fraud, abuse, underutilization, drug–drug interaction, and medically unnecessary care, and implement corrective actions when needed. The Centers for Medicare & Medicaid Services requires each state to measure the prescription drug cost-savings generated from its DUR programs annually, but it provides no methodology for doing so. An earlier article compared the methodologies used by states to measure cost-savings in their retrospective DUR program in fiscal years 2014 and 2015.

OBJECTIVE: To describe and synthesize the methodologies used by states to measure cost-savings using their Medicaid prospective DUR program in federal fiscal years 2015 and 2016.

METHODS: For each state, we downloaded from Medicaid’s website the cost-savings methodologies included in the Medicaid DUR 2015 and 2016 reports. We then reviewed and synthesized the reports. Methods described by the states were classified into a unique group based on the methodology used, except for Arkansas and Connecticut, which were classified in more than 1 category for the same period.

RESULTS: Currently, 3 different methodologies are being used by states. In 2015 and 2016, the most common methodology (used by 18 states) was calculating total claim rejections minus claim resubmissions, valued at the amount actually paid. Comparisons of DUR program cost-savings among states are unreliable, because the states lack a common methodology for measuring their performance.

CONCLUSIONS: Considering the lack of methodologic consistency among states in measuring savings in the Medicaid DUR program shown in this analysis, the federal government must lead an effort to define a single methodology for measuring cost-savings across the entire DUR program. This would improve the measurement of savings among states and clarify how the program is performing.

Key Words: cost-avoidance, cost-comparison, cost-savings, Medicaid Drug Utilization Review program, methodology transparency, prospective review, retrospective review

Am Health Drug Benefits. 2019;12(1):7-12. www.AHDBonline.com

Manuscript received May 11, 2018
Accepted in final form October 15, 2018

Disclosures are at end of text

The Medicaid Drug Utilization Review (DUR) program conducted by Medicaid state agencies promotes patient safety through state-administered drug utilization management tools and systems in a 2-phase process.1 The first phase of the program is a prospective DUR, which involves electronically monitoring prescription drug claims to identify problems such as therapeutic duplication and drug–disease contraindications. The second phase is a retrospective DUR, which involves ongoing and periodic examination of claims data to identify patterns of fraud, abuse, overuse, or medically unnecessary care, and implements corrective action when needed.

With the enactment of the Omnibus Budget Reconciliation Act of 1990, states were mandated to implement DUR programs by January 1993. Each state Medicaid agency is required to submit an annual report on the operation of these programs, including a calculation of the cost-savings related to the operation of each program.2

The prospective DUR tool and systems assess each prescription for an individual Medicaid patient before the medication is dispensed to identify drug-related problems.3 If an alert is triggered on the submission of a drug claim, the pharmacist must make the appropriate response to the alert, which is captured electronically.

Appropriate actions include discontinuing unnecessary prescriptions, reducing quantities of prescribed medications, switching to safer drug therapies, or adding a therapy recommended in published (evidence-based) guidelines from an expert panel. Otherwise, the pharmacist can override the alert based on his or her best professional judgment or after consultation with the prescribing physician. When a claim is denied as a result of a prospective edit, there may be a “replacement” or a “substitute” claim. Of note, the prospective DUR occurs before any drug claim is entered into the process of submission for Medicaid reimbursement.

In 1992, the US Health Care Financing Administration awarded 2 cooperative agreements for demonstrations of prospective DUR programs. In particular, Iowa tested an online prospective DUR system, and an evaluation of this program was published in 1999.4 The cost-savings were estimated by using multivariate regression analyses to compare the total Medicaid drug payment and the individual total costs for 8 drug classes between pharmacies that participated in the demonstration and pharmacies that did not (ie, the control group).4 Multivariate regressions included covariates such as age, sex, race, and Medicaid eligibility.

Besides the United States, other countries have DUR programs. An evaluation of a pilot program on the real-time DUR system in South Korea was published in 2013.5 The savings were calculated by subtracting the costs of drugs that were actually prescribed or dispensed from the cost of drugs that were initially entered by clinics or pharmacies when sending data to the Health Insurance Review & Assessment Service. If the prescription was changed after a review, this was reflected in the drug expenditure changes.

In India, drug utilization studies have been done at the provider level, not as a national program.6 Prospective studies were conducted in different hospitals and in periods of less than 1 year to investigate the use of antidiabetic, antihypertensive, antiepileptic, nonsteroidal anti-inflammatory, and antibiotic drugs, and to identify whether such prescription patterns are appropriate in accordance with international guidelines.6 Detailed information about the avoided costs calculation method was not provided.

In the United Kingdom, among the main interventions introduced to control medicines wastage are medicines use reviews, which can generate significant savings in the number and value of medicines dispensed when conducted to a well-defined standard.7 For example, a small-scale study in Leeds, England, in 2000, showed an average reduction of £4.72 in the net cost of drugs per patient over 28 days.8

A study conducted in 2003 and 2004 by Aston University in Birmingham, England, showed that medicines use reviews could reduce the number of repeat medicines ordered by more than 20%, while keeping the patient on a 3-month repeat prescription.9 Similarly, the United Kingdom’s National Health Service Tayside piloted medicines reviews and patient awareness and education measures at 8 general practices in 2005, and reported measurable improvements in wastage that were independent of the prescription’s length.7

However, in 2007, the National Audit Office concluded that the take-up of medicines use reviews around the United Kingdom had been lower than expected, and that there were information problems in ensuring that pharmacists had accurate records of the patient’s repeat prescriptions, and that they were able to convey their reports back to the general practitioner.10 The detailed calculation methodology in medicines use reviews was not reported in either case.

Finally, an international review of evaluative studies of policies to control expenditures on pharmaceuticals by influencing the behavior of prescribers showed that educational interventions, including the DUR program, can lower pharmaceutical utilization and expenditures when the focus of intervention is on cost-effectiveness information, but that the changes are likely to be modest.11 Only randomized controlled trials and rigorous quasi-experimental designs (ie, interrupted time-series and controlled before-and-after studies) were eligible for review, but information on the cost-savings methodology used in DUR programs was not provided.11

A previously published review of the Medicaid retrospective DUR program reports submitted by each state Medicaid agency to the Centers for Medicare & Medicaid Services (CMS) in 2014 and 2015 showed a remarkable lack of consistency in cost-savings methodology.12

The goal of this current follow-up article is to review whether that same lack of consistency is also found in the Medicaid prospective DUR program. In doing so, this article also provides a brief summary of the current methods used by states in 2015 and 2016 for cost-savings estimates in these programs.

Methods

For each state, the cost-savings methodology was downloaded from the Medicaid website from the DUR 2015 and DUR 2016 reports to CMS.13,14 We downloaded a total of 100 reports, 2 for each state (1 for each year); however, reports from Arizona were unavailable, because almost all of its Medicaid program beneficiaries are enrolled in managed care organizations. We then reviewed the documents to extract the methodology used by each state to estimate cost-savings in the prospective DUR program.

Next, we grouped the methodologies into 3 categories, using criteria such as the inclusion of reversed and denied drug prescription claims in the calculations and the aggregation criterion of the allowable payments of the claims. Within each group, we identified additional refinements, such as the disaggregation of cost-savings by type of alert, the elimination of duplicate claims, the exclusion of delayed costs, and the use of weights.

All the states were classified into a unique group based on the methodology used, except Arkansas and Connecticut, which were classified in more than 1 category for the same period (Table 1).13,14

Table 1

In Arkansas in federal fiscal year 2015, the drug-dispensing cost-savings were estimated by using a combination of 2 methodologies. In Connecticut, the 2015 and 2016 reports state that either of 2 methodologies could have been used in estimating the cost-savings.13,14

Results

In 2015, 39 of the 50 states and the District of Columbia reported having a prospective DUR program, 10 reported using other DUR programs (eg, the retrospective DUR), 1 reported no DUR program, and 1 state’s (ie, Arizona) report was unavailable.13

In 2016, 40 states reported having a prospective DUR program and, as in the previous year, 10 states reported using other DUR programs (ie, the retrospective DUR); 1 state’s (ie, Arizona) report was unavailable.14 The increase from 39 to 40 states reporting savings as a result of the prospective DUR program was explained by a reclassification of cost-savings in the state of Maine.14

In 2016, Maine’s total estimated savings from its preferred drug list or prior authorization program were categorized as resulting from its prospective DUR program, whereas in 2015 the same program was classified in the “other programs” category.13,14

In 2015, of the 39 states that reported cost-savings as a result of having a prospective DUR program, 31 shared details regarding the methodology used (27 states reported full information) and 8 states did not. In 2016, 33 states shared details of their methodologies (27 states reported full information) and 7 did not.13,14

After reviewing the 65 prospective DUR reports (Table 1) that included methodologic details, it was evident that 3 different methods were used to estimate cost-savings or cost-avoidance. These are (1) the total rejections subtracting resubmissions at the amount actually paid; (2) the total rejections without subtracting resubmissions at the average amount paid; and (3) the total rejections without subtracting resubmissions at the amount actually paid. The order in which the methods are shown in Table 1 reflects the frequency of their use and does not imply methodologic rigor.13,14

In addition, we defined a fourth category to account for states that reported savings resulting from the prior authorizations and the preferred drugs list programs as part of the prospective DUR program. Strictly speaking, these are prospective programs, because they limit what prescribers can prescribe; however, some states classified savings from such programs in the “other costs avoided” category.13,14

Table 1 provides details on which method each state used, by fiscal year. Most states used the same method in 2015 and in 2016.13,14 Arkansas used a combination of the total rejections subtracting resubmissions at the amount actually paid and the total rejections without subtracting resubmissions at the average amount paid methodologies in 2015, but used only the former methodology in 2016. Connecticut reported calculating the total rejections without subtracting resubmissions, at both the average amount paid and the amount actually paid.13,14

As noted before, Maine did not report any cost-savings through a prospective DUR program in 2015, although it did report them in the 2016 DUR report. Therefore, Maine was excluded from the list of states using a prospective DUR program in 2015, but it was included in the prior authorizations and preferred drugs list analysis group for 2016.13,14

Table 2 shows the cost-savings estimation methodologies that were used by states,13,14 as discussed below.

Table 2

Method 1: Total Rejections Subtracting Resubmissions at the Amount Actually Paid

In this methodology, the prospective DUR cost-avoidance calculation requires identifying claims with prospective DUR messages that were denied or reversed, whether or not they were subsequently resubmitted. Some states (eg, Arkansas) treat a pharmacist’s nonresponse to a soft alert as a denied claim.

The prospective DUR total cost-avoidance is the sum of the paid claims cost-avoidance and the denied claims cost-avoidance. The paid claims cost-avoidance is calculated by taking the paid dollar amount of claims with a prospective DUR message that were paid but subsequently reversed, and subtracting the amount paid for the claims resubmitted.

Similarly, the denied claims cost-avoidance is calculated by taking the submitted dollar value of the claims that were initially denied and had a prospective DUR message, and subtracting the value of any claims that were then resubmitted. In other words, the cost-avoidance is the difference between the allowable payment amounts of the denied and reversed claims and the allowable payment amounts of the resubmitted claims, which could have been higher or lower than the original claims.
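
As an illustration only, the following minimal Python sketch shows one way the arithmetic of method 1 could be organized over a set of claim records. The Claim structure, field names, and dollar amounts are hypothetical and are not drawn from any state's reporting system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float    # paid amount for reversed claims; submitted amount for denied claims
    status: str      # "reversed", "denied", or "resubmitted"

def method1_cost_avoidance(claims):
    """Method 1: total rejections minus resubmissions, at the amount actually paid.

    Cost-avoidance = (reversed paid amounts + denied submitted amounts)
                     - amounts of the claims that were later resubmitted.
    """
    reversed_total = sum(c.amount for c in claims if c.status == "reversed")
    denied_total = sum(c.amount for c in claims if c.status == "denied")
    resubmitted_total = sum(c.amount for c in claims if c.status == "resubmitted")
    return (reversed_total + denied_total) - resubmitted_total

# Illustrative figures only
claims = [
    Claim("A1", 120.00, "reversed"),
    Claim("A2", 80.00, "denied"),
    Claim("A3", 75.00, "resubmitted"),  # replacement claim for A2
]
print(method1_cost_avoidance(claims))   # 125.0
```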

Most states did not report a time limit for the resubmission of denied or reversed claims, except for Florida, Michigan, New Jersey, and New Mexico. Florida and Michigan considered resubmitted claims valid if they were submitted within 72 hours of a denial, or within the same calendar month for reversed claims. New Jersey established a 60-day period after the date of denial during which no future paid claims were identified. In New Mexico, a claim was counted as reversed only if it had been reversed within 24 hours (a same-day reversal). Moreover, Arkansas established a 7-day time frame for pharmacists to respond to a prospective DUR alert; otherwise the claim was denied, and no program funds were spent.

In 2015 and 2016, Louisiana identified the cost-savings associated with the rejected submissions resulting from early refills as a deferred cost. In this way, the cost-savings were calculated in proportion to the number of days between the date when the claim was initially denied and the date when the claim was subsequently paid. Likewise, Maryland, New Mexico, and Minnesota used multipliers to estimate more accurately the avoided costs associated with early refill alerts, because they were interpreted primarily as delayed costs. On average, only between 15% and 25% of early refill claims were assumed to be cost-saving.
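
The treatment of early refill denials can be illustrated with a short, hypothetical sketch. The 20% multiplier and the proration against a 30-day supply are illustrative assumptions in the spirit of the approaches described above; they are not the exact formulas used by Louisiana, Maryland, Minnesota, or New Mexico.

```python
def early_refill_savings(denied_amount, multiplier=0.20):
    """Count only a fraction of an early-refill denial as avoided cost; the rest
    is assumed to be merely delayed. The 0.20 default is an illustrative value
    within the 15%-25% range cited in the state reports."""
    return denied_amount * multiplier

def prorated_deferred_savings(denied_amount, days_denied_to_paid, days_supply=30):
    """Hypothetical proration in the spirit of Louisiana's deferred-cost approach:
    savings scale with the number of days between the initial denial and the
    eventual payment. The 30-day supply denominator is an assumption."""
    return denied_amount * min(days_denied_to_paid / days_supply, 1.0)

print(early_refill_savings(100.00))            # 20.0
print(prorated_deferred_savings(100.00, 6))    # 20.0
```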

When a claim is denied because of a prospective edit, there may be a replacement or a substitute claim. To look for possible future replacement or substitutes, some states create a unique identifier for the claim that is denied, using information on the patient and the American Hospital Formulary Service Pharmacologic-Therapeutic Classification of the denied drug. Each denied claim is then compared and matched with subsequent paid claims based on the unique identification number. When a match is found, the denied claim is no longer considered for savings calculation. For a detailed example of this methodology, please refer to the 2016 Arkansas Medicaid DUR report.1
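
A minimal sketch of this matching step, with hypothetical field names and amounts, is shown below; in practice, the identifier combines patient information with the American Hospital Formulary Service therapeutic class of the denied drug, as described above.

```python
def unmatched_denied_claims(denied_claims, paid_claims):
    """Exclude denied claims that were later replaced by a paid claim sharing the
    same (patient, therapeutic class) key; only unmatched denials count toward
    savings. Records are plain dicts with illustrative field names."""
    paid_keys = {(c["patient_id"], c["ahfs_class"]) for c in paid_claims}
    return [c for c in denied_claims
            if (c["patient_id"], c["ahfs_class"]) not in paid_keys]

denied = [
    {"patient_id": "P1", "ahfs_class": "28:08", "submitted_amount": 60.00},
    {"patient_id": "P2", "ahfs_class": "08:12", "submitted_amount": 90.00},
]
paid = [
    {"patient_id": "P1", "ahfs_class": "28:08", "paid_amount": 55.00},  # replacement claim
]
savings = sum(c["submitted_amount"] for c in unmatched_denied_claims(denied, paid))
print(savings)  # 90.0 -- only the unmatched denial for P2 is counted
```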

Some states (ie, New Jersey, Louisiana, Maryland, New Mexico, and Ohio) reported savings using the total rejections subtracting resubmissions at the amount actually paid methodology at the alert (eg, drug–drug interaction) and drug levels. Similarly, in most of the state reports, the savings are estimated on an annual basis, but some states reported these figures monthly (ie, Virginia) or quarterly (ie, Georgia and New Jersey).

States using this methodology saved more than $729 million in 2015 and $809 million in 2016; the average cost-savings per state was $43 million and $48 million in the first and second years, respectively. Florida was not included, because of a dramatic decrease in savings from 2015 to 2016 (from $1.1 billion to $300 million).13,14

Method 2: Total Rejections without Subtracting Resubmissions at the Average Amount Paid

A second methodology used in 2015 and 2016 to calculate cost-savings through the prospective DUR program multiplies the number of rejected and no-response claims with DUR edits by the average amount paid per prescription. The average is calculated as the ratio of the total paid amount for drug requests to the number of paid claims.13,14
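
For illustration, the core arithmetic of method 2 can be sketched as follows, using made-up counts and dollar figures rather than any state's actual data.

```python
def method2_cost_avoidance(n_rejected, n_no_response, total_paid_amount, n_paid_claims):
    """Method 2: (rejected + no-response claims with DUR edits) multiplied by the
    average amount paid per prescription, where the average is the total paid
    dollars divided by the number of paid claims."""
    average_paid = total_paid_amount / n_paid_claims
    return (n_rejected + n_no_response) * average_paid

# Illustrative figures only
print(method2_cost_avoidance(n_rejected=1_000, n_no_response=200,
                             total_paid_amount=5_000_000.00, n_paid_claims=80_000))
# 1,200 claims x $62.50 average = 75,000.0
```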

In a slight variation of the methodology, Wyoming calculates savings by adding the total amount paid for reversed claims to the denied amount. Texas, by contrast, disaggregates denied requests into those with and without substitute therapies within 7 days of the original denial for the same drug category; for denials with substitute therapies, the final calculation includes an adjustment equal to the reimbursement amount of the substitute therapy.

Some states (ie, California, Indiana, Massachusetts, and Oklahoma) assumed that a percentage of all cancellations and nonresponses were duplicate edits, so the previously calculated savings amount was adjusted downward by multiplying it by a number between 0 and 1, depending on the specific percentage defined by each state. In Texas, duplicates were defined as claims with the same client identification and drug (ie, generic code number) within 7 days of the initial denied request. Duplicates are excluded to obtain the number of unique denials, which is then multiplied by the average paid amount.
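
A hypothetical sketch of the Texas-style duplicate screen is shown below; the field names, dates, and average paid amount are illustrative only. States that instead apply a fixed adjustment factor would simply multiply the resulting total by a state-defined number between 0 and 1.

```python
from datetime import date

def unique_denials(denials, window_days=7):
    """Keep only the first denial per (client_id, generic_code) pair within the
    7-day window that follows the initial denied request; later denials in the
    window are treated as duplicates (illustrative Texas-style screen)."""
    first_seen = {}
    unique = []
    for d in sorted(denials, key=lambda rec: rec["date"]):
        key = (d["client_id"], d["generic_code"])
        if key in first_seen and (d["date"] - first_seen[key]).days <= window_days:
            continue  # duplicate of an earlier denial for the same client and drug
        first_seen[key] = d["date"]
        unique.append(d)
    return unique

# Illustrative figures only
denials = [
    {"client_id": "C1", "generic_code": "12345", "date": date(2016, 3, 1)},
    {"client_id": "C1", "generic_code": "12345", "date": date(2016, 3, 4)},  # duplicate
    {"client_id": "C2", "generic_code": "67890", "date": date(2016, 3, 2)},
]
average_paid = 62.50
print(len(unique_denials(denials)) * average_paid)  # 2 unique denials x $62.50 = 125.0
```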

Likewise, some states apply an additional downward adjustment to the estimated avoided costs for denied early refill claims, under the assumption that a large proportion of the costs associated with these denials were delayed rather than avoided (ie, they would be incurred in the future once the time limit to refill was reached).

As in the previous methodology, the cost-savings are reported by alert type and by drug with a similar description, strength, and route of administration. Most states also report savings on an annual basis, as required by CMS. The states using this methodology saved more than $953 million in 2015 and $811 million in 2016; the average cost-savings per state was $105 million and $101 million in the first and second years, respectively.13,14

For a detailed calculation used by any of the states using the methodology, please refer to the 2016 Medicaid DUR reports for California and Texas.14

Method 3: Total Rejections without Subtracting Resubmissions at the Amount Actually Paid

The third methodology is the simple sum of the amounts paid per claim, either reversed or denied (which includes no responses in some cases), associated with a DUR rejection. Unlike the previous 2 methodologies, no adjustments were made for duplicates or early refills, thereby assuming that 100% of the denials and nonresponses generated savings. Savings are reported by the type of alert on an annual basis.13,14
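
Because no adjustments are applied, method 3 reduces to a simple summation, as in the following hypothetical sketch; the amounts and dispositions are illustrative only.

```python
def method3_cost_avoidance(dur_rejections):
    """Method 3: simple sum of the amounts tied to reversed or denied claims
    (including nonresponses in some states), with no adjustment for duplicates,
    resubmissions, or early refills."""
    return sum(c["amount"] for c in dur_rejections)

# Illustrative figures only
rejections = [
    {"amount": 120.00, "disposition": "reversed"},
    {"amount": 80.00, "disposition": "denied"},
    {"amount": 45.00, "disposition": "no_response"},
]
print(method3_cost_avoidance(rejections))  # 245.0
```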

States using this methodology saved more than $657 million in 2015 and $602 million in 2016; the average cost-savings per state was $109 million and $100 million in the first and second years, respectively.13,14

For a detailed numeric example, see the 2016 Medicaid DUR report for Nevada.14

Discussion

In 2016, 40 states reported a Medicaid prospective DUR program evaluation; of these states, 33 reported information on the method used. In most states, the estimation was performed by private third-party companies. According to CMS’s consolidated report, states saved an average of $52.9 million, which is almost 50 times more than was reported for the retrospective DUR program ($1,247,960).14

However, as shown in this article, at least 3 different methodologies are used by states across the country, thus making comparisons and descriptive statistics unreliable. In addition, some states reported their methods vaguely or not at all. The analysis presented here is limited by the amount of information available to the public on Medicaid’s website.

Judging by the per-state cost-savings (total and average), method 1 yields lower savings estimates per state, because it subtracts resubmissions, whereas methods 2 and 3 yield higher savings per state, because no subtraction is performed. Whether the average or the actual amount paid is used makes less of a difference between these methodologies.

Because of the lack of a common methodology for measuring cost-savings in the Medicaid DUR program, the reported savings of 18% in 201614 resulting from the prospective and retrospective DUR programs are not accurate, because they are derived from a mix of different methods.

Conclusions

Previous research showed great variation among states in the methods used to measure savings in the Medicaid retrospective DUR program. The current study shows that this variation also applies to the prospective DUR program. Because of the lack of a common methodology, any savings in the DUR program reported by states may be inaccurate, because they are derived from mixed methodologies. We therefore suggest that the federal government lead an effort to define a common methodology, to improve how the program’s savings are measured and to understand how the program is performing.

This is not an easy task, because decisions are made at the state level, but CMS may offer technical guidance, at a low cost. One way would be for CMS to convene the different organizations that perform the DUR program savings estimates for each state, and assess what could be a common methodology, given the information available. Another way could be to request that states report their savings using the most frequently used method in other states, as well as their chosen method, in the event that those 2 methods are not the same.

Author Disclosure Statement

Dr Prada and Mr Loaiza have no conflicts of interest to report.


References

  1. Centers for Medicare & Medicaid Services. Drug utilization review. www.medicaid.gov/medicaid/prescription-drugs/drug-utilization-review/index.html. Accessed January 30, 2018.
  2. Fulda TR, Lyles A, Pugh MC, Christensen DB. Current status of prospective drug utilization review. J Manag Care Pharm. 2004;10:433-441.
  3. Peterson AM, Chan V, Wilson MD. Chapter 8: Drug utilization review strategies. In: Navarro RP, ed. Managed Care Pharmacy Practice. 2nd ed. Sudbury, MA: Jones & Bartlett Publishers; 2009:215-231.
  4. Kidder D, Bae J. Evaluation results from prospective drug utilization review: Medicaid demonstrations. Health Care Financ Rev. 1999;20:107-118.
  5. Heo JH, Suh DC, Kim S, Lee EK. Evaluation of the pilot program on the real-time drug utilization review system in South Korea. Int J Med Inform. 2013;82:987-995.
  6. Jain S, Upadhyaya P, Goyal J, et al. A systematic review of prescription pattern monitoring studies and their effectiveness in promoting rational use of medicines. Perspect Clin Res. 2015;6:86-90.
  7. White KG. UK interventions to control medicines wastage: a critical review. Int J Pharm Pract. 2010;18:131-140.
  8. Zermansky AG, Petty DR, Raynor DK, et al. Randomised controlled trial of clinical medication review by a pharmacist of elderly patients receiving repeat prescriptions in general practice. BMJ. 2001;323:1340-1343.
  9. Jesson J, Pocock R, Wilson K. Reducing medicines waste in the community. Primary Health Care Res Dev. 2005;6:117-124.
  10. National Audit Office. Prescribing Costs in Primary Care. Report by the Comptroller and Auditor General. HC 454 session 2006-2007. May 18, 2007. www.nao.org.uk/wp-content/uploads/2007/05/0607454.pdf. Accessed January 17, 2019.
  11. Lee IH, Bloor K, Hewitt C, Maynard A. International experience in controlling pharmaceutical expenditure: influencing patients and providers and regulating industry – a systematic review. J Health Serv Res Policy. 2015;20:52-59.
  12. Prada SI. Comparing the Medicaid retrospective Drug Utilization Review program cost-savings methods used by state agencies. Am Health Drug Benefits. 2017;10(9):477-482.
  13. Centers for Medicare & Medicaid Services; Center for Medicaid & CHIP Services. Medicaid Drug Utilization Review state comparison/summary report FFY 2015 annual report: prescription drug fee-for-service programs. December 2016. www.medicaid.gov/medicaid-chip-program-information/by-topics/prescription-drugs/downloads/2015-dur-summary-report.pdf. Accessed January 30, 2017.
  14. Centers for Medicare & Medicaid Services; Center for Medicaid & CHIP Services. Medicaid Drug Utilization Review state comparison/summary report FFY 2016 annual report: prescription drug fee-for-service programs. October 2017. www.medicaid.gov/medicaid-chip-program-information/by-topics/prescription-drugs/downloads/2016-dur-summary-report.pdf. Accessed January 30, 2017.