Comparing Model Error Between a Standard Risk Adjustment Model and a Disease-Specific Risk Adjustment Model
The pursuit of value-based care, driven by health care reform laws such as the Affordable Care Act (ACA) and the Medicare Modernization Act, has accelerated the use of risk adjustment. Payers, providers, and health intervention vendors may underestimate the extent to which model error influences risk adjustment results. “Model error” is a measure of the difference between the predicted quantity and the actual quantity. These differences arise from stochastic variance in the underlying health care quantities being compared. Model error is reduced with volume, but it can be significant when the sample size is small or the population has large variations in costs.[1]

In practice, model error can result in a positive savings calculation for a program that has in fact generated no savings, or in a savings amount significantly higher or lower than the true (unobserved) savings. Risk adjustment and other population standardization approaches, such as propensity matching, are applied to reduce population differences and increase comparability; however, residual model error persists after these models are applied. In fact, these models bring their own inherent variation, introducing additional sources of model error and potentially increasing its aggregate magnitude. Risk corridors are frequently used to mitigate this error in practice, but many may be insufficient to fully account for model error, especially for small sample sizes, specific disease groups, and populations with high cost variance.[2]
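
To make the sample-size effect concrete, the following minimal sketch (Python, not drawn from the referenced studies) simulates an "intervention" group and a "comparison" group drawn from the same hypothetical right-skewed cost distribution, so the true savings is zero by construction. Any measured savings is therefore pure model error arising from stochastic variance, and its spread narrows as the group size grows. The lognormal parameters, group sizes, and trial count are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical per-member annual cost distribution: right-skewed lognormal
# with a mean of roughly $6,000 per member per year (illustrative only).
LOG_MEAN, LOG_SIGMA = 8.0, 1.2


def simulated_savings(n_members: int, n_trials: int = 5_000) -> np.ndarray:
    """Measured per-member 'savings' across trials when true savings is zero.

    Both groups are drawn from the same cost distribution, so any nonzero
    result is model error driven purely by stochastic variance.
    """
    savings = np.empty(n_trials)
    for t in range(n_trials):
        intervention = rng.lognormal(LOG_MEAN, LOG_SIGMA, n_members)
        comparison = rng.lognormal(LOG_MEAN, LOG_SIGMA, n_members)
        savings[t] = comparison.mean() - intervention.mean()
    return savings


for n in (250, 1_000, 10_000):
    lo, hi = np.percentile(simulated_savings(n), [2.5, 97.5])
    print(f"n={n:>6}: 95% of spurious per-member savings fall in "
          f"[{lo:,.0f}, {hi:,.0f}] dollars")
```

Because the standard error of a mean cost scales roughly as σ/√n, halving the spread of spurious savings requires roughly quadrupling the measured population, which is why small programs and high-variance disease populations are particularly exposed to model error.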